Test Report: Docker_Linux_crio 21512

67b6671f4b7f755dd397ae36ae992d15d1f5bc42:2025-09-08:41332

Test failures (20/325)

TestAddons/parallel/Ingress (151.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-960652 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-960652 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-960652 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3e527626-603b-4c03-8575-4a6beb2298e9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3e527626-603b-4c03-8575-4a6beb2298e9] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004149135s
I0908 11:36:45.831831  618620 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-960652 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.662495108s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
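The ssh command reported "Process exited with status 28", which is curl's exit code for a timed-out request; the probe was retried for roughly 2m10s before the test gave up. A minimal sketch for reproducing the check by hand, assuming the addons-960652 profile is still running (these commands are illustrative and not part of the harness):

	# Re-run the probe with verbose output and an explicit timeout
	out/minikube-linux-amd64 -p addons-960652 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Inspect the ingress controller the test waited on (same selector as addons_test.go:209)
	kubectl --context addons-960652 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
	kubectl --context addons-960652 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100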
addons_test.go:288: (dbg) Run:  kubectl --context addons-960652 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
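The ingress-dns follow-up resolves hello-john.test against the node IP returned by minikube ip (192.168.49.2, matching the docker inspect output below). A hedged sketch of the same two steps for manual verification, assuming the profile still exists:

	IP=$(out/minikube-linux-amd64 -p addons-960652 ip)
	nslookup hello-john.test "$IP"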
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-960652
helpers_test.go:243: (dbg) docker inspect addons-960652:

-- stdout --
	[
	    {
	        "Id": "24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c",
	        "Created": "2025-09-08T11:33:39.320264797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 620519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T11:33:39.358365869Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/hosts",
	        "LogPath": "/var/lib/docker/containers/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c-json.log",
	        "Name": "/addons-960652",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-960652:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-960652",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c",
	                "LowerDir": "/var/lib/docker/overlay2/2dea81149420091d81a76eb8de5f710286b08de429c3af55a4042752a3c447f2-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2dea81149420091d81a76eb8de5f710286b08de429c3af55a4042752a3c447f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2dea81149420091d81a76eb8de5f710286b08de429c3af55a4042752a3c447f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2dea81149420091d81a76eb8de5f710286b08de429c3af55a4042752a3c447f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-960652",
	                "Source": "/var/lib/docker/volumes/addons-960652/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-960652",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-960652",
	                "name.minikube.sigs.k8s.io": "addons-960652",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0aa3c1b7418107fc0ec83e378475db5d130bc5f9ca8c6af4ddb7d24724f95ec1",
	            "SandboxKey": "/var/run/docker/netns/0aa3c1b74181",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-960652": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:92:16:85:a6:a4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45fdf7b9b342569c8c85c165d0b4cad936d009232c6be61e085655511c342d62",
	                    "EndpointID": "801af4b3003c918ad01636a6f4e3619c580ea9c686a26ac0443ff42863cdc68f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-960652",
	                        "24f37931d688"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
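The inspect output shows the node container running with host port mappings bound to 127.0.0.1 (SSH 22/tcp on 33138, API server 8443/tcp on 33141) and the static IP 192.168.49.2 on the addons-960652 network. The harness later reads the SSH port and node IP with Go templates (see the provisionDockerMachine and configureAuth lines below); a minimal sketch of the same queries, assuming the container still exists:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-960652
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-960652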
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-960652 -n addons-960652
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-960652 logs -n 25: (1.255742122s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p download-docker-220243 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-220243 │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │                     │
	│ delete  │ -p download-docker-220243                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-220243 │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │ 08 Sep 25 11:33 UTC │
	│ start   │ --download-only -p binary-mirror-922073 --alsologtostderr --binary-mirror http://127.0.0.1:41749 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-922073   │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │                     │
	│ delete  │ -p binary-mirror-922073                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-922073   │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │ 08 Sep 25 11:33 UTC │
	│ addons  │ enable dashboard -p addons-960652                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │                     │
	│ addons  │ disable dashboard -p addons-960652                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │                     │
	│ start   │ -p addons-960652 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │ 08 Sep 25 11:35 UTC │
	│ addons  │ addons-960652 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:35 UTC │ 08 Sep 25 11:35 UTC │
	│ addons  │ addons-960652 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ enable headlamp -p addons-960652 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ ssh     │ addons-960652 ssh cat /opt/local-path-provisioner/pvc-a4a34cf8-0045-4c4f-ba4a-0035da17388c_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ ip      │ addons-960652 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-960652                                                                                                                                                                                                                                                                                                                                                                                           │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ ssh     │ addons-960652 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │                     │
	│ ip      │ addons-960652 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-960652          │ jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:33:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:33:14.318482  619897 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:33:14.318736  619897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:33:14.318745  619897 out.go:374] Setting ErrFile to fd 2...
	I0908 11:33:14.318750  619897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:33:14.318954  619897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 11:33:14.319579  619897 out.go:368] Setting JSON to false
	I0908 11:33:14.320529  619897 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8138,"bootTime":1757323056,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:33:14.320645  619897 start.go:140] virtualization: kvm guest
	I0908 11:33:14.322603  619897 out.go:179] * [addons-960652] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:33:14.324029  619897 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:33:14.324043  619897 notify.go:220] Checking for updates...
	I0908 11:33:14.325571  619897 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:33:14.326863  619897 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:33:14.328099  619897 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:33:14.329398  619897 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:33:14.330822  619897 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:33:14.332263  619897 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:33:14.355754  619897 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:33:14.355870  619897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:33:14.408213  619897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 11:33:14.397941321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:33:14.408343  619897 docker.go:318] overlay module found
	I0908 11:33:14.410248  619897 out.go:179] * Using the docker driver based on user configuration
	I0908 11:33:14.411724  619897 start.go:304] selected driver: docker
	I0908 11:33:14.411746  619897 start.go:918] validating driver "docker" against <nil>
	I0908 11:33:14.411761  619897 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:33:14.412687  619897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:33:14.462076  619897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 11:33:14.45326667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:33:14.462299  619897 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:33:14.462601  619897 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:33:14.464297  619897 out.go:179] * Using Docker driver with root privileges
	I0908 11:33:14.465630  619897 cni.go:84] Creating CNI manager for ""
	I0908 11:33:14.465714  619897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:33:14.465729  619897 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 11:33:14.465825  619897 start.go:348] cluster config:
	{Name:addons-960652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0908 11:33:14.467367  619897 out.go:179] * Starting "addons-960652" primary control-plane node in "addons-960652" cluster
	I0908 11:33:14.468505  619897 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 11:33:14.469665  619897 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 11:33:14.470730  619897 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:33:14.470768  619897 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 11:33:14.470778  619897 cache.go:58] Caching tarball of preloaded images
	I0908 11:33:14.470835  619897 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 11:33:14.470894  619897 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 11:33:14.470907  619897 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 11:33:14.471281  619897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/config.json ...
	I0908 11:33:14.471312  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/config.json: {Name:mk63e696e8d863718ad39ec8567b26250dce130a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:14.488199  619897 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 11:33:14.488333  619897 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 11:33:14.488352  619897 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 11:33:14.488357  619897 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 11:33:14.488366  619897 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 11:33:14.488373  619897 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0908 11:33:27.108889  619897 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0908 11:33:27.108933  619897 cache.go:232] Successfully downloaded all kic artifacts
	I0908 11:33:27.108976  619897 start.go:360] acquireMachinesLock for addons-960652: {Name:mk9214c1ac5ed01d58429ac05ff6466e746c07e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:33:27.109145  619897 start.go:364] duration metric: took 138.48µs to acquireMachinesLock for "addons-960652"
	I0908 11:33:27.109185  619897 start.go:93] Provisioning new machine with config: &{Name:addons-960652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:33:27.109276  619897 start.go:125] createHost starting for "" (driver="docker")
	I0908 11:33:27.111289  619897 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0908 11:33:27.111576  619897 start.go:159] libmachine.API.Create for "addons-960652" (driver="docker")
	I0908 11:33:27.111623  619897 client.go:168] LocalClient.Create starting
	I0908 11:33:27.111794  619897 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem
	I0908 11:33:27.211527  619897 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem
	I0908 11:33:27.622080  619897 cli_runner.go:164] Run: docker network inspect addons-960652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 11:33:27.639449  619897 cli_runner.go:211] docker network inspect addons-960652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 11:33:27.639520  619897 network_create.go:284] running [docker network inspect addons-960652] to gather additional debugging logs...
	I0908 11:33:27.639546  619897 cli_runner.go:164] Run: docker network inspect addons-960652
	W0908 11:33:27.657247  619897 cli_runner.go:211] docker network inspect addons-960652 returned with exit code 1
	I0908 11:33:27.657281  619897 network_create.go:287] error running [docker network inspect addons-960652]: docker network inspect addons-960652: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-960652 not found
	I0908 11:33:27.657297  619897 network_create.go:289] output of [docker network inspect addons-960652]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-960652 not found
	
	** /stderr **
	I0908 11:33:27.657445  619897 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 11:33:27.675352  619897 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fac3d0}
	I0908 11:33:27.675404  619897 network_create.go:124] attempt to create docker network addons-960652 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0908 11:33:27.675451  619897 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-960652 addons-960652
	I0908 11:33:27.728660  619897 network_create.go:108] docker network addons-960652 192.168.49.0/24 created
	I0908 11:33:27.728693  619897 kic.go:121] calculated static IP "192.168.49.2" for the "addons-960652" container
	I0908 11:33:27.728762  619897 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 11:33:27.746243  619897 cli_runner.go:164] Run: docker volume create addons-960652 --label name.minikube.sigs.k8s.io=addons-960652 --label created_by.minikube.sigs.k8s.io=true
	I0908 11:33:27.764714  619897 oci.go:103] Successfully created a docker volume addons-960652
	I0908 11:33:27.764830  619897 cli_runner.go:164] Run: docker run --rm --name addons-960652-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-960652 --entrypoint /usr/bin/test -v addons-960652:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 11:33:34.695476  619897 cli_runner.go:217] Completed: docker run --rm --name addons-960652-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-960652 --entrypoint /usr/bin/test -v addons-960652:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (6.930597146s)
	I0908 11:33:34.695519  619897 oci.go:107] Successfully prepared a docker volume addons-960652
	I0908 11:33:34.695548  619897 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:33:34.695600  619897 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 11:33:34.695685  619897 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-960652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 11:33:39.251406  619897 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-960652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.555670977s)
	I0908 11:33:39.251440  619897 kic.go:203] duration metric: took 4.555836685s to extract preloaded images to volume ...
	W0908 11:33:39.251887  619897 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 11:33:39.252127  619897 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 11:33:39.303821  619897 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-960652 --name addons-960652 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-960652 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-960652 --network addons-960652 --ip 192.168.49.2 --volume addons-960652:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 11:33:39.592070  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Running}}
	I0908 11:33:39.612774  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:33:39.632684  619897 cli_runner.go:164] Run: docker exec addons-960652 stat /var/lib/dpkg/alternatives/iptables
	I0908 11:33:39.677281  619897 oci.go:144] the created container "addons-960652" has a running status.
	I0908 11:33:39.677318  619897 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa...
	I0908 11:33:40.245857  619897 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 11:33:40.267352  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:33:40.286678  619897 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 11:33:40.286705  619897 kic_runner.go:114] Args: [docker exec --privileged addons-960652 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 11:33:40.332091  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:33:40.350583  619897 machine.go:93] provisionDockerMachine start ...
	I0908 11:33:40.350710  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:40.369620  619897 main.go:141] libmachine: Using SSH client type: native
	I0908 11:33:40.369939  619897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0908 11:33:40.369954  619897 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:33:40.491619  619897 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-960652
	
	I0908 11:33:40.491674  619897 ubuntu.go:182] provisioning hostname "addons-960652"
	I0908 11:33:40.491749  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:40.509332  619897 main.go:141] libmachine: Using SSH client type: native
	I0908 11:33:40.509560  619897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0908 11:33:40.509576  619897 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-960652 && echo "addons-960652" | sudo tee /etc/hostname
	I0908 11:33:40.639767  619897 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-960652
	
	I0908 11:33:40.639848  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:40.659283  619897 main.go:141] libmachine: Using SSH client type: native
	I0908 11:33:40.659709  619897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0908 11:33:40.659749  619897 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-960652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-960652/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-960652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:33:40.784357  619897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:33:40.784397  619897 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 11:33:40.784448  619897 ubuntu.go:190] setting up certificates
	I0908 11:33:40.784468  619897 provision.go:84] configureAuth start
	I0908 11:33:40.784537  619897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-960652
	I0908 11:33:40.802972  619897 provision.go:143] copyHostCerts
	I0908 11:33:40.803073  619897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 11:33:40.803228  619897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 11:33:40.803329  619897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 11:33:40.803406  619897 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.addons-960652 san=[127.0.0.1 192.168.49.2 addons-960652 localhost minikube]
	I0908 11:33:41.038169  619897 provision.go:177] copyRemoteCerts
	I0908 11:33:41.038254  619897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:33:41.038314  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.056946  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.149752  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 11:33:41.176267  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 11:33:41.202552  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 11:33:41.227486  619897 provision.go:87] duration metric: took 442.998521ms to configureAuth
	I0908 11:33:41.227518  619897 ubuntu.go:206] setting minikube options for container-runtime
	I0908 11:33:41.227740  619897 config.go:182] Loaded profile config "addons-960652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:33:41.227864  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.246451  619897 main.go:141] libmachine: Using SSH client type: native
	I0908 11:33:41.246682  619897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0908 11:33:41.246702  619897 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:33:41.464439  619897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:33:41.464474  619897 machine.go:96] duration metric: took 1.113860455s to provisionDockerMachine
	I0908 11:33:41.464488  619897 client.go:171] duration metric: took 14.352856449s to LocalClient.Create
	I0908 11:33:41.464516  619897 start.go:167] duration metric: took 14.352939885s to libmachine.API.Create "addons-960652"
	I0908 11:33:41.464532  619897 start.go:293] postStartSetup for "addons-960652" (driver="docker")
	I0908 11:33:41.464552  619897 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:33:41.464646  619897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:33:41.464720  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.483833  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.577389  619897 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:33:41.580702  619897 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 11:33:41.580728  619897 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 11:33:41.580735  619897 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 11:33:41.580742  619897 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 11:33:41.580753  619897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 11:33:41.580822  619897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 11:33:41.580848  619897 start.go:296] duration metric: took 116.304547ms for postStartSetup
	I0908 11:33:41.581160  619897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-960652
	I0908 11:33:41.599119  619897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/config.json ...
	I0908 11:33:41.599384  619897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:33:41.599425  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.616938  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.700946  619897 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 11:33:41.705474  619897 start.go:128] duration metric: took 14.596174719s to createHost
	I0908 11:33:41.705503  619897 start.go:83] releasing machines lock for "addons-960652", held for 14.596339054s
	I0908 11:33:41.705580  619897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-960652
	I0908 11:33:41.723185  619897 ssh_runner.go:195] Run: cat /version.json
	I0908 11:33:41.723254  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.723300  619897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:33:41.723385  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.741595  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.741883  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.899870  619897 ssh_runner.go:195] Run: systemctl --version
	I0908 11:33:41.904519  619897 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:33:42.046562  619897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 11:33:42.053144  619897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:33:42.072524  619897 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 11:33:42.072628  619897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:33:42.101638  619897 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 11:33:42.101663  619897 start.go:495] detecting cgroup driver to use...
	I0908 11:33:42.101700  619897 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 11:33:42.101747  619897 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:33:42.118167  619897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:33:42.129770  619897 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:33:42.129825  619897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:33:42.143544  619897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:33:42.158241  619897 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:33:42.244536  619897 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:33:42.331904  619897 docker.go:234] disabling docker service ...
	I0908 11:33:42.331962  619897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:33:42.352483  619897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:33:42.364776  619897 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:33:42.444084  619897 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:33:42.532173  619897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:33:42.543862  619897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:33:42.561072  619897 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 11:33:42.561140  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.571592  619897 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:33:42.571689  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.584397  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.595120  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.605544  619897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:33:42.615152  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.625278  619897 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.641872  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.652113  619897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:33:42.660725  619897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:33:42.669680  619897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:33:42.750148  619897 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 11:33:42.864131  619897 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:33:42.864225  619897 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 11:33:42.868259  619897 start.go:563] Will wait 60s for crictl version
	I0908 11:33:42.868333  619897 ssh_runner.go:195] Run: which crictl
	I0908 11:33:42.872381  619897 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:33:42.911298  619897 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 11:33:42.911418  619897 ssh_runner.go:195] Run: crio --version
	I0908 11:33:42.951463  619897 ssh_runner.go:195] Run: crio --version
	I0908 11:33:42.993113  619897 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 11:33:42.994407  619897 cli_runner.go:164] Run: docker network inspect addons-960652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 11:33:43.013073  619897 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 11:33:43.017560  619897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:33:43.030202  619897 kubeadm.go:875] updating cluster {Name:addons-960652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:33:43.030319  619897 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:33:43.030364  619897 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:33:43.101136  619897 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:33:43.101160  619897 crio.go:433] Images already preloaded, skipping extraction
	I0908 11:33:43.101209  619897 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:33:43.137200  619897 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:33:43.137230  619897 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:33:43.137239  619897 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0908 11:33:43.137347  619897 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-960652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:33:43.137413  619897 ssh_runner.go:195] Run: crio config
	I0908 11:33:43.182531  619897 cni.go:84] Creating CNI manager for ""
	I0908 11:33:43.182569  619897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:33:43.182583  619897 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:33:43.182618  619897 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-960652 NodeName:addons-960652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:33:43.182786  619897 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-960652"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:33:43.182864  619897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:33:43.192022  619897 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:33:43.192099  619897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:33:43.201596  619897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 11:33:43.219879  619897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:33:43.238069  619897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0908 11:33:43.256801  619897 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 11:33:43.260731  619897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:33:43.272448  619897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:33:43.348010  619897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:33:43.362120  619897 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652 for IP: 192.168.49.2
	I0908 11:33:43.362147  619897 certs.go:194] generating shared ca certs ...
	I0908 11:33:43.362167  619897 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:43.362309  619897 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 11:33:43.440168  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt ...
	I0908 11:33:43.440206  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt: {Name:mk7d80f7a404aff80aeaffcfc4edffccdfeb7dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:43.440392  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key ...
	I0908 11:33:43.440405  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key: {Name:mkab08724aeb68516406bd46f7ec1f74215962cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:43.440487  619897 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 11:33:44.023691  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt ...
	I0908 11:33:44.023726  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt: {Name:mk03660a868fb9422d263878f84ec4cde0130a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.023904  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key ...
	I0908 11:33:44.023915  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key: {Name:mkf24004a21bb9937639c2d6fa8c74d200b76207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.023987  619897 certs.go:256] generating profile certs ...
	I0908 11:33:44.024051  619897 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.key
	I0908 11:33:44.024071  619897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt with IP's: []
	I0908 11:33:44.154900  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt ...
	I0908 11:33:44.154940  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: {Name:mk67662acead9d252eb7928a0dc11c0c1f2c005f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.155124  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.key ...
	I0908 11:33:44.155136  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.key: {Name:mk8d2739d1405a7f36688c270312154dc92c57bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.155208  619897 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key.a861e59b
	I0908 11:33:44.155227  619897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt.a861e59b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0908 11:33:44.298597  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt.a861e59b ...
	I0908 11:33:44.298638  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt.a861e59b: {Name:mke9f4c212d3a4b584e6eb01f969fdf642fa3e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.298810  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key.a861e59b ...
	I0908 11:33:44.298832  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key.a861e59b: {Name:mkec19ac8aebaaff0a652a609af11dad1edf4727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.298898  619897 certs.go:381] copying /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt.a861e59b -> /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt
	I0908 11:33:44.298977  619897 certs.go:385] copying /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key.a861e59b -> /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key
	I0908 11:33:44.299024  619897 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.key
	I0908 11:33:44.299040  619897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.crt with IP's: []
	I0908 11:33:44.855211  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.crt ...
	I0908 11:33:44.855252  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.crt: {Name:mkc5aebb59908b46f397bbf30d93767d827141d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.855484  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.key ...
	I0908 11:33:44.855504  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.key: {Name:mkc6d25fff608d09c2ed36e59950b7baef9b05b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.855754  619897 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 11:33:44.855801  619897 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 11:33:44.855840  619897 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:33:44.855872  619897 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 11:33:44.856480  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:33:44.883262  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 11:33:44.908769  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:33:44.932969  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 11:33:44.957563  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 11:33:44.982793  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 11:33:45.007259  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:33:45.031085  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 11:33:45.054914  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:33:45.078362  619897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:33:45.096451  619897 ssh_runner.go:195] Run: openssl version
	I0908 11:33:45.101900  619897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:33:45.111291  619897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:33:45.114741  619897 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:33:45.114792  619897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:33:45.121328  619897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:33:45.131152  619897 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:33:45.135058  619897 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 11:33:45.135114  619897 kubeadm.go:392] StartCluster: {Name:addons-960652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:33:45.135195  619897 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:33:45.135249  619897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:33:45.172742  619897 cri.go:89] found id: ""
	I0908 11:33:45.172817  619897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:33:45.181786  619897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:33:45.190780  619897 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 11:33:45.190843  619897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:33:45.200921  619897 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:33:45.200943  619897 kubeadm.go:157] found existing configuration files:
	
	I0908 11:33:45.200997  619897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 11:33:45.209848  619897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:33:45.209907  619897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:33:45.218481  619897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 11:33:45.227090  619897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:33:45.227154  619897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:33:45.235619  619897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 11:33:45.244180  619897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:33:45.244247  619897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:33:45.252892  619897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 11:33:45.261549  619897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:33:45.261601  619897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 11:33:45.270096  619897 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 11:33:45.324755  619897 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 11:33:45.325032  619897 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0908 11:33:45.379226  619897 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 11:33:56.410930  619897 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 11:33:56.411022  619897 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 11:33:56.411141  619897 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 11:33:56.411238  619897 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0908 11:33:56.411297  619897 kubeadm.go:310] OS: Linux
	I0908 11:33:56.411365  619897 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 11:33:56.411450  619897 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 11:33:56.411524  619897 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 11:33:56.411601  619897 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 11:33:56.411697  619897 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 11:33:56.411772  619897 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 11:33:56.411839  619897 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 11:33:56.411909  619897 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 11:33:56.411988  619897 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 11:33:56.412097  619897 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 11:33:56.412263  619897 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 11:33:56.412399  619897 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 11:33:56.412496  619897 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 11:33:56.414508  619897 out.go:252]   - Generating certificates and keys ...
	I0908 11:33:56.414666  619897 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 11:33:56.414773  619897 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 11:33:56.414875  619897 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 11:33:56.414962  619897 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 11:33:56.415058  619897 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 11:33:56.415138  619897 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 11:33:56.415224  619897 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 11:33:56.415360  619897 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-960652 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 11:33:56.415431  619897 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 11:33:56.415538  619897 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-960652 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 11:33:56.415594  619897 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 11:33:56.415685  619897 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 11:33:56.415726  619897 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 11:33:56.415801  619897 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 11:33:56.415856  619897 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 11:33:56.415914  619897 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 11:33:56.415962  619897 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 11:33:56.416018  619897 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 11:33:56.416076  619897 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 11:33:56.416149  619897 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 11:33:56.416226  619897 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 11:33:56.417483  619897 out.go:252]   - Booting up control plane ...
	I0908 11:33:56.417581  619897 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 11:33:56.417657  619897 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 11:33:56.417736  619897 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 11:33:56.417862  619897 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 11:33:56.417986  619897 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 11:33:56.418092  619897 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 11:33:56.418170  619897 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 11:33:56.418245  619897 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 11:33:56.418375  619897 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 11:33:56.418471  619897 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 11:33:56.418539  619897 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995188ms
	I0908 11:33:56.418669  619897 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 11:33:56.418785  619897 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0908 11:33:56.418910  619897 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 11:33:56.419023  619897 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 11:33:56.419087  619897 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.110331441s
	I0908 11:33:56.419153  619897 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.682121444s
	I0908 11:33:56.419207  619897 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.501384741s
	I0908 11:33:56.419341  619897 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 11:33:56.419443  619897 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 11:33:56.419602  619897 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 11:33:56.419871  619897 kubeadm.go:310] [mark-control-plane] Marking the node addons-960652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 11:33:56.419952  619897 kubeadm.go:310] [bootstrap-token] Using token: 4fcisp.lymz102kws8rtzux
	I0908 11:33:56.421327  619897 out.go:252]   - Configuring RBAC rules ...
	I0908 11:33:56.421462  619897 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 11:33:56.421550  619897 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 11:33:56.421694  619897 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 11:33:56.421828  619897 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 11:33:56.421961  619897 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 11:33:56.422085  619897 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 11:33:56.422242  619897 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 11:33:56.422334  619897 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 11:33:56.422407  619897 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 11:33:56.422415  619897 kubeadm.go:310] 
	I0908 11:33:56.422462  619897 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 11:33:56.422467  619897 kubeadm.go:310] 
	I0908 11:33:56.422574  619897 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 11:33:56.422599  619897 kubeadm.go:310] 
	I0908 11:33:56.422645  619897 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 11:33:56.422706  619897 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 11:33:56.422749  619897 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 11:33:56.422755  619897 kubeadm.go:310] 
	I0908 11:33:56.422796  619897 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 11:33:56.422810  619897 kubeadm.go:310] 
	I0908 11:33:56.422848  619897 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 11:33:56.422854  619897 kubeadm.go:310] 
	I0908 11:33:56.422894  619897 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 11:33:56.422955  619897 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 11:33:56.423014  619897 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 11:33:56.423041  619897 kubeadm.go:310] 
	I0908 11:33:56.423155  619897 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 11:33:56.423267  619897 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 11:33:56.423280  619897 kubeadm.go:310] 
	I0908 11:33:56.423390  619897 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4fcisp.lymz102kws8rtzux \
	I0908 11:33:56.423542  619897 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7637fdf591f5caebb87eda0c466f59e2329c3b4b8f568d0c2721bf3a0b62aaf9 \
	I0908 11:33:56.423576  619897 kubeadm.go:310] 	--control-plane 
	I0908 11:33:56.423590  619897 kubeadm.go:310] 
	I0908 11:33:56.423724  619897 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 11:33:56.423733  619897 kubeadm.go:310] 
	I0908 11:33:56.423839  619897 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4fcisp.lymz102kws8rtzux \
	I0908 11:33:56.423988  619897 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7637fdf591f5caebb87eda0c466f59e2329c3b4b8f568d0c2721bf3a0b62aaf9 
	I0908 11:33:56.424015  619897 cni.go:84] Creating CNI manager for ""
	I0908 11:33:56.424025  619897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:33:56.425557  619897 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 11:33:56.426820  619897 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 11:33:56.431301  619897 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 11:33:56.431326  619897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 11:33:56.450462  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 11:33:56.670998  619897 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:33:56.671086  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:56.671123  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-960652 minikube.k8s.io/updated_at=2025_09_08T11_33_56_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=addons-960652 minikube.k8s.io/primary=true
	I0908 11:33:56.678870  619897 ops.go:34] apiserver oom_adj: -16
	I0908 11:33:56.885029  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:57.385360  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:57.885679  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:58.385872  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:58.885540  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:59.385878  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:59.885163  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:34:00.385274  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:34:00.885800  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:34:00.953407  619897 kubeadm.go:1105] duration metric: took 4.282393594s to wait for elevateKubeSystemPrivileges
	I0908 11:34:00.953442  619897 kubeadm.go:394] duration metric: took 15.818333544s to StartCluster
	I0908 11:34:00.953468  619897 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:34:00.953609  619897 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:34:00.954090  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:34:00.954320  619897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 11:34:00.954343  619897 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:34:00.954425  619897 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 11:34:00.954539  619897 config.go:182] Loaded profile config "addons-960652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:34:00.954569  619897 addons.go:69] Setting yakd=true in profile "addons-960652"
	I0908 11:34:00.954583  619897 addons.go:69] Setting cloud-spanner=true in profile "addons-960652"
	I0908 11:34:00.954592  619897 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-960652"
	I0908 11:34:00.954604  619897 addons.go:69] Setting registry=true in profile "addons-960652"
	I0908 11:34:00.954614  619897 addons.go:69] Setting default-storageclass=true in profile "addons-960652"
	I0908 11:34:00.954617  619897 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-960652"
	I0908 11:34:00.954622  619897 addons.go:238] Setting addon registry=true in "addons-960652"
	I0908 11:34:00.954617  619897 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-960652"
	I0908 11:34:00.954640  619897 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-960652"
	I0908 11:34:00.954665  619897 addons.go:69] Setting ingress-dns=true in profile "addons-960652"
	I0908 11:34:00.954678  619897 addons.go:69] Setting volumesnapshots=true in profile "addons-960652"
	I0908 11:34:00.954680  619897 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-960652"
	I0908 11:34:00.954690  619897 addons.go:69] Setting registry-creds=true in profile "addons-960652"
	I0908 11:34:00.954700  619897 addons.go:69] Setting metrics-server=true in profile "addons-960652"
	I0908 11:34:00.954701  619897 addons.go:69] Setting storage-provisioner=true in profile "addons-960652"
	I0908 11:34:00.954705  619897 addons.go:238] Setting addon registry-creds=true in "addons-960652"
	I0908 11:34:00.954712  619897 addons.go:238] Setting addon storage-provisioner=true in "addons-960652"
	I0908 11:34:00.954628  619897 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-960652"
	I0908 11:34:00.954731  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954737  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954606  619897 addons.go:69] Setting ingress=true in profile "addons-960652"
	I0908 11:34:00.954713  619897 addons.go:238] Setting addon metrics-server=true in "addons-960652"
	I0908 11:34:00.954756  619897 addons.go:238] Setting addon ingress=true in "addons-960652"
	I0908 11:34:00.954764  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954787  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954691  619897 addons.go:69] Setting inspektor-gadget=true in profile "addons-960652"
	I0908 11:34:00.955110  619897 addons.go:238] Setting addon inspektor-gadget=true in "addons-960652"
	I0908 11:34:00.955164  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954664  619897 addons.go:69] Setting volcano=true in profile "addons-960652"
	I0908 11:34:00.955281  619897 addons.go:238] Setting addon volcano=true in "addons-960652"
	I0908 11:34:00.955300  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955337  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955366  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.954596  619897 addons.go:238] Setting addon yakd=true in "addons-960652"
	I0908 11:34:00.955417  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.955700  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955817  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955860  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955311  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.956648  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955166  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.954654  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954678  619897 addons.go:238] Setting addon ingress-dns=true in "addons-960652"
	I0908 11:34:00.957896  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954569  619897 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-960652"
	I0908 11:34:00.958069  619897 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-960652"
	I0908 11:34:00.958102  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.958396  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.958588  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.954596  619897 addons.go:238] Setting addon cloud-spanner=true in "addons-960652"
	I0908 11:34:00.958764  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954636  619897 addons.go:69] Setting gcp-auth=true in profile "addons-960652"
	I0908 11:34:00.960051  619897 mustload.go:65] Loading cluster: addons-960652
	I0908 11:34:00.954655  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954677  619897 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-960652"
	I0908 11:34:00.964673  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.954692  619897 addons.go:238] Setting addon volumesnapshots=true in "addons-960652"
	I0908 11:34:00.968383  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954741  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.969613  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.969658  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.957520  619897 out.go:179] * Verifying Kubernetes components...
	I0908 11:34:00.973609  619897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:34:00.991981  619897 config.go:182] Loaded profile config "addons-960652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:34:00.992110  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	W0908 11:34:00.992223  619897 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 11:34:00.992276  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.992286  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.995889  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:01.001216  619897 addons.go:238] Setting addon default-storageclass=true in "addons-960652"
	I0908 11:34:01.001272  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:01.001702  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:01.004620  619897 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 11:34:01.007043  619897 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 11:34:01.007215  619897 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 11:34:01.007232  619897 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 11:34:01.007316  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.008153  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 11:34:01.008174  619897 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 11:34:01.008233  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.010295  619897 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 11:34:01.011469  619897 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:34:01.011504  619897 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 11:34:01.011523  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 11:34:01.011593  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.013040  619897 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:34:01.013063  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:34:01.013122  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.020732  619897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 11:34:01.024740  619897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 11:34:01.025935  619897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 11:34:01.027387  619897 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 11:34:01.027412  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 11:34:01.027485  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.040551  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 11:34:01.041439  619897 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-960652"
	I0908 11:34:01.041494  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:01.041564  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.041798  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 11:34:01.041821  619897 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 11:34:01.041878  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.041937  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:01.055155  619897 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0908 11:34:01.058318  619897 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 11:34:01.058347  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 11:34:01.058424  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.060891  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 11:34:01.061941  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 11:34:01.063022  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 11:34:01.063983  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 11:34:01.065118  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 11:34:01.066103  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 11:34:01.067173  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 11:34:01.068242  619897 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 11:34:01.070413  619897 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 11:34:01.070436  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 11:34:01.070505  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.070931  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 11:34:01.071507  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.072445  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 11:34:01.072475  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 11:34:01.072555  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.087297  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:01.088477  619897 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 11:34:01.088581  619897 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 11:34:01.090812  619897 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 11:34:01.090838  619897 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 11:34:01.090910  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.091155  619897 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 11:34:01.093241  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.095785  619897 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 11:34:01.095815  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 11:34:01.095878  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.098375  619897 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 11:34:01.099557  619897 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 11:34:01.099586  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 11:34:01.099668  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.105077  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.112113  619897 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:34:01.112142  619897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:34:01.112205  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.113635  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.117771  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.125950  619897 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 11:34:01.132793  619897 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 11:34:01.132823  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 11:34:01.132899  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.137490  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.139597  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.141421  619897 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 11:34:01.141453  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.145469  619897 out.go:179]   - Using image docker.io/busybox:stable
	I0908 11:34:01.145473  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.146880  619897 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 11:34:01.146903  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 11:34:01.146963  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.151091  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.154696  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.157280  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.163220  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.167060  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	W0908 11:34:01.182587  619897 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 11:34:01.182635  619897 retry.go:31] will retry after 169.961046ms: ssh: handshake failed: EOF
	W0908 11:34:01.182587  619897 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 11:34:01.182656  619897 retry.go:31] will retry after 211.29249ms: ssh: handshake failed: EOF
	I0908 11:34:01.289199  619897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 11:34:01.380080  619897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:34:01.490045  619897 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 11:34:01.490152  619897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 11:34:01.491928  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 11:34:01.491966  619897 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 11:34:01.493756  619897 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:01.493784  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 11:34:01.578489  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:34:01.596098  619897 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 11:34:01.596202  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 11:34:01.680901  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 11:34:01.693003  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:01.776251  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 11:34:01.776359  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 11:34:01.777684  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 11:34:01.778223  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 11:34:01.778278  619897 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 11:34:01.789500  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 11:34:01.877454  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:34:01.879375  619897 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 11:34:01.879454  619897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 11:34:01.883571  619897 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 11:34:01.883663  619897 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 11:34:01.884319  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 11:34:01.894207  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 11:34:01.993144  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 11:34:02.077317  619897 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 11:34:02.077417  619897 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 11:34:02.084042  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 11:34:02.084165  619897 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 11:34:02.091599  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 11:34:02.097190  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 11:34:02.097287  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 11:34:02.281278  619897 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 11:34:02.281394  619897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 11:34:02.297371  619897 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 11:34:02.297487  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 11:34:02.377371  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 11:34:02.377409  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 11:34:02.494447  619897 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:34:02.494584  619897 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 11:34:02.577905  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 11:34:02.589092  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 11:34:02.589199  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 11:34:02.680976  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 11:34:02.681084  619897 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 11:34:02.781293  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 11:34:02.781403  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 11:34:02.790672  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 11:34:02.790712  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 11:34:03.378788  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:34:03.393269  619897 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 11:34:03.393308  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 11:34:03.398047  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 11:34:03.487000  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 11:34:03.487118  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 11:34:03.687784  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 11:34:03.687887  619897 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 11:34:03.776442  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 11:34:03.795307  619897 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.415060036s)
	I0908 11:34:03.796293  619897 node_ready.go:35] waiting up to 6m0s for node "addons-960652" to be "Ready" ...
	I0908 11:34:03.796640  619897 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.507390317s)
	I0908 11:34:03.796693  619897 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
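The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1 in this run). A minimal sketch for inspecting the patched Corefile; the jsonpath query is an assumption, not part of this run:

    # Print the patched Corefile; per the sed script above it should now contain:
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }
    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'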
	I0908 11:34:04.376688  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 11:34:04.376817  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 11:34:04.887162  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 11:34:04.887278  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 11:34:05.101951  619897 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-960652" context rescaled to 1 replicas
	I0908 11:34:05.277675  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 11:34:05.277778  619897 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 11:34:05.385881  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W0908 11:34:05.977850  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:06.391275  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.710270254s)
	I0908 11:34:06.391543  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.813015231s)
	I0908 11:34:06.677695  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.984498727s)
	W0908 11:34:06.677790  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:06.677820  619897 retry.go:31] will retry after 170.880749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
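The validation failure above targets /etc/kubernetes/addons/ig-crd.yaml, which the transfer at 11:34:01.007 reported as only 14 bytes, so the copied manifest is most likely (near-)empty rather than syntactically broken. A minimal diagnostic sketch, assuming only the profile name shown in this run:

    # Inspect the copied CRD manifest inside the node; a ~14-byte file cannot carry apiVersion/kind.
    minikube -p addons-960652 ssh 'sudo wc -c /etc/kubernetes/addons/ig-crd.yaml && sudo cat /etc/kubernetes/addons/ig-crd.yaml'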
	I0908 11:34:06.849242  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:07.601838  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.812297836s)
	I0908 11:34:07.601935  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.724377166s)
	I0908 11:34:07.601958  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.717577763s)
	I0908 11:34:07.601979  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.707685157s)
	I0908 11:34:07.602024  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.608779058s)
	I0908 11:34:07.602051  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.510342542s)
	I0908 11:34:07.602082  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.024077908s)
	I0908 11:34:07.602859  619897 addons.go:479] Verifying addon registry=true in "addons-960652"
	I0908 11:34:07.602132  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.223238413s)
	I0908 11:34:07.603167  619897 addons.go:479] Verifying addon metrics-server=true in "addons-960652"
	I0908 11:34:07.602173  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.20409235s)
	I0908 11:34:07.603356  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.825596652s)
	I0908 11:34:07.603415  619897 addons.go:479] Verifying addon ingress=true in "addons-960652"
	I0908 11:34:07.604567  619897 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-960652 service yakd-dashboard -n yakd-dashboard
	
	I0908 11:34:07.604594  619897 out.go:179] * Verifying registry addon...
	I0908 11:34:07.605424  619897 out.go:179] * Verifying ingress addon...
	I0908 11:34:07.606784  619897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 11:34:07.607292  619897 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 11:34:07.679628  619897 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 11:34:07.679741  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:07.679810  619897 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 11:34:07.679834  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:07.684362  619897 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0908 11:34:08.110720  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:08.110766  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:08.299834  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:08.610717  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:08.610931  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:08.696797  619897 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 11:34:08.696992  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:08.720842  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:08.977855  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.201353915s)
	W0908 11:34:08.977907  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 11:34:08.977932  619897 retry.go:31] will retry after 216.72198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
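The "no matches for kind \"VolumeSnapshotClass\"" error is the usual race between creating CRDs and objects of those kinds in a single apply; minikube retries after a short delay, and the later --force apply at 11:34:09 completes without another retry in this log. A minimal sketch of waiting for the snapshot CRDs to become established before applying the class; the explicit wait step is an assumption, not something this run performs:

    # Wait for the snapshot CRDs to be established, then apply the snapshot class (sketch only).
    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      wait --for=condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io --timeout=60s
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl \
      apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml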
	I0908 11:34:08.978171  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.592166773s)
	I0908 11:34:08.978211  619897 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-960652"
	I0908 11:34:08.980027  619897 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 11:34:08.982448  619897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 11:34:08.986052  619897 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 11:34:08.986078  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:09.002683  619897 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 11:34:09.085374  619897 addons.go:238] Setting addon gcp-auth=true in "addons-960652"
	I0908 11:34:09.085448  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:09.085840  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:09.106987  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.257700445s)
	W0908 11:34:09.107023  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:09.107046  619897 retry.go:31] will retry after 329.343963ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:09.107885  619897 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 11:34:09.107943  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:09.111569  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:09.111698  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:09.126635  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:09.195230  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 11:34:09.436982  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:09.485960  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:09.611581  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:09.611858  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:09.986323  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:10.110577  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:10.110861  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:10.300065  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:10.486318  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:10.610696  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:10.610928  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:10.986489  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:11.111223  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:11.111278  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:11.486808  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:11.610664  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:11.610884  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:11.730439  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.535158491s)
	I0908 11:34:11.730504  619897 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.622585723s)
	I0908 11:34:11.730548  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.293512363s)
	W0908 11:34:11.730573  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:11.730591  619897 retry.go:31] will retry after 623.173809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:11.732880  619897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 11:34:11.734287  619897 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 11:34:11.735614  619897 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 11:34:11.735633  619897 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 11:34:11.754344  619897 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 11:34:11.754384  619897 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 11:34:11.771820  619897 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 11:34:11.771844  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 11:34:11.789886  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 11:34:11.986218  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:12.114087  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:12.114792  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:12.177449  619897 addons.go:479] Verifying addon gcp-auth=true in "addons-960652"
	I0908 11:34:12.179101  619897 out.go:179] * Verifying gcp-auth addon...
	I0908 11:34:12.181827  619897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 11:34:12.184308  619897 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 11:34:12.184328  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:12.300201  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:12.354464  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:12.486775  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:12.610932  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:12.611042  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:12.686063  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:12.925012  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:12.925054  619897 retry.go:31] will retry after 909.363968ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:12.986352  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:13.110617  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:13.110672  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:13.185926  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:13.486312  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:13.611492  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:13.611524  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:13.685558  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:13.835593  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:13.986175  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:14.111927  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:14.111940  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:14.185623  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:14.300499  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	W0908 11:34:14.401824  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:14.401860  619897 retry.go:31] will retry after 1.294572327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:14.486041  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:14.611005  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:14.611161  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:14.712056  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:14.986095  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:15.111204  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:15.111250  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:15.185311  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:15.486949  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:15.611340  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:15.611559  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:15.685430  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:15.697495  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:15.985523  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:16.110404  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:16.110605  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:16.185013  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:16.272316  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:16.272376  619897 retry.go:31] will retry after 961.705756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:16.486269  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:16.611337  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:16.611417  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:16.685404  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:16.799305  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:16.987101  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:17.111483  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:17.111641  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:17.185931  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:17.235082  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:17.486476  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:17.611237  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:17.611298  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:17.685790  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:17.804873  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:17.804910  619897 retry.go:31] will retry after 1.762445108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:17.986287  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:18.111452  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:18.111709  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:18.185357  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:18.485627  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:18.610723  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:18.610836  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:18.686171  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:18.800435  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:18.986064  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:19.111432  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:19.111530  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:19.185273  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:19.486062  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:19.568258  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:19.610773  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:19.610875  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:19.685741  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:19.986575  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:20.111569  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:20.111601  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 11:34:20.132104  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:20.132145  619897 retry.go:31] will retry after 2.782976601s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:20.185407  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:20.486018  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:20.611345  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:20.611437  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:20.685641  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:20.986720  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:21.111429  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:21.111533  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:21.185116  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:21.300492  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:21.486462  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:21.610480  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:21.610540  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:21.685439  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:21.986341  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:22.110520  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:22.110849  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:22.185765  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:22.485893  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:22.611062  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:22.611210  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:22.685239  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:22.915600  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:22.986821  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:23.111570  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:23.111637  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:23.186170  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:23.486236  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:34:23.491554  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:23.491590  619897 retry.go:31] will retry after 6.078040333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:23.610917  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:23.611023  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:23.686076  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:23.799831  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:23.985764  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:24.110755  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:24.110959  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:24.184972  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:24.486878  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:24.610831  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:24.610972  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:24.685261  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:24.986798  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:25.110788  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:25.110893  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:25.184999  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:25.486267  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:25.610798  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:25.610983  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:25.686122  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:25.800114  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:25.986443  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:26.110745  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:26.110881  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:26.184883  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:26.485853  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:26.611176  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:26.611223  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:26.686927  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:26.986367  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:27.110553  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:27.110737  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:27.185734  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:27.486180  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:27.611719  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:27.611937  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:27.686249  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:27.800422  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:27.985724  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:28.111006  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:28.111081  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:28.184853  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:28.486071  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:28.611302  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:28.611529  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:28.685690  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:28.986581  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:29.110671  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:29.110792  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:29.185861  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:29.486568  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:29.570751  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:29.611366  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:29.611539  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:29.685331  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:29.800681  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:29.985950  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:30.110758  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:30.110799  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:30.168721  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:30.168760  619897 retry.go:31] will retry after 10.429694039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:30.186034  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:30.485669  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:30.611259  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:30.611525  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:30.685512  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:30.986025  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:31.111433  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:31.111613  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:31.185521  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:31.485852  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:31.610770  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:31.610918  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:31.685832  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:31.985437  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:32.110485  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:32.110599  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:32.185442  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:32.299577  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:32.485705  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:32.611362  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:32.611505  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:32.685537  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:32.985818  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:33.111161  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:33.111319  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:33.185156  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:33.486648  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:33.610639  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:33.610824  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:33.686061  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:33.986379  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:34.110642  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:34.110692  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:34.185822  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:34.299930  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:34.486405  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:34.610296  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:34.610343  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:34.685117  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:34.986807  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:35.111337  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:35.111351  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:35.185697  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:35.486452  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:35.610757  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:35.610886  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:35.686065  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:35.986070  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:36.111162  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:36.111173  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:36.185032  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:36.300054  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:36.486110  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:36.611401  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:36.611484  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:36.685433  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:36.986659  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:37.111310  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:37.111319  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:37.185305  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:37.485534  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:37.611205  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:37.611406  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:37.685513  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:37.986746  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:38.111238  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:38.111328  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:38.185484  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:38.486481  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:38.610528  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:38.610582  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:38.685777  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:38.799783  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:38.985814  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:39.111099  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:39.111211  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:39.184960  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:39.485895  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:39.610986  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:39.611241  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:39.684983  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:39.986703  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:40.110931  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:40.111224  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:40.185288  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:40.486673  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:40.598858  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:40.611418  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:40.611538  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:40.685256  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:40.800265  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:40.986439  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:41.110739  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:41.110843  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:41.169150  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:41.169186  619897 retry.go:31] will retry after 21.354000525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:41.185419  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:41.485902  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:41.610981  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:41.611151  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:41.685151  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:41.986966  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:42.110695  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:42.110881  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:42.185061  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:42.486711  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:42.611199  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:42.611296  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:42.685246  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:42.800641  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:42.985705  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:43.110875  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:43.110975  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:43.185671  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:43.486188  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:43.611223  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:43.611362  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:43.685289  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:43.985551  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:44.111036  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:44.111092  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:44.185069  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:44.486193  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:44.611179  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:44.611342  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:44.685580  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:44.986367  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:45.111345  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:45.111423  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:45.185125  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:45.300099  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:45.491225  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:45.680531  619897 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 11:34:45.680557  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:45.682328  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:45.689241  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:45.805898  619897 node_ready.go:49] node "addons-960652" is "Ready"
	I0908 11:34:45.805944  619897 node_ready.go:38] duration metric: took 42.009613325s for node "addons-960652" to be "Ready" ...
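(The two lines above mark the end of the node-readiness wait: after roughly 42s the node flips from "Ready":"False" to Ready. A minimal way to reproduce the same check by hand, a sketch assuming the kubeconfig context from this run is still available:)

	# wait until the single node of this profile reports the Ready condition
	kubectl --context addons-960652 wait --for=condition=Ready node/addons-960652 --timeout=90s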
	I0908 11:34:45.805965  619897 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:34:45.806033  619897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:34:45.892893  619897 api_server.go:72] duration metric: took 44.938504251s to wait for apiserver process to appear ...
	I0908 11:34:45.892927  619897 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:34:45.892957  619897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0908 11:34:45.900123  619897 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0908 11:34:45.901502  619897 api_server.go:141] control plane version: v1.34.0
	I0908 11:34:45.901602  619897 api_server.go:131] duration metric: took 8.660932ms to wait for apiserver health ...
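(The healthz probe above hits the apiserver directly at https://192.168.49.2:8443/healthz and then reads the control-plane version. The same two checks can be reproduced through kubectl, as a sketch under the same context assumption:)

	# raw healthz probe against the apiserver (expects the literal response "ok")
	kubectl --context addons-960652 get --raw /healthz
	# client and server versions; the server side should report v1.34.0 for this run
	kubectl --context addons-960652 version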
	I0908 11:34:45.901622  619897 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:34:45.981045  619897 system_pods.go:59] 20 kube-system pods found
	I0908 11:34:45.981171  619897 system_pods.go:61] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending
	I0908 11:34:45.981205  619897 system_pods.go:61] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:34:45.981253  619897 system_pods.go:61] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:45.981274  619897 system_pods.go:61] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending
	I0908 11:34:45.981291  619897 system_pods.go:61] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending
	I0908 11:34:45.981307  619897 system_pods.go:61] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:45.981336  619897 system_pods.go:61] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:45.981360  619897 system_pods.go:61] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:45.981377  619897 system_pods.go:61] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:45.981394  619897 system_pods.go:61] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending
	I0908 11:34:45.981409  619897 system_pods.go:61] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:45.981437  619897 system_pods.go:61] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:45.981464  619897 system_pods.go:61] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:45.981480  619897 system_pods.go:61] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending
	I0908 11:34:45.981494  619897 system_pods.go:61] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending
	I0908 11:34:45.981508  619897 system_pods.go:61] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending
	I0908 11:34:45.981534  619897 system_pods.go:61] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending
	I0908 11:34:45.981562  619897 system_pods.go:61] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:45.981580  619897 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:45.981594  619897 system_pods.go:61] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Pending
	I0908 11:34:45.981637  619897 system_pods.go:74] duration metric: took 80.006948ms to wait for pod list to return data ...
	I0908 11:34:45.981667  619897 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:34:45.985381  619897 default_sa.go:45] found service account: "default"
	I0908 11:34:45.985411  619897 default_sa.go:55] duration metric: took 3.727336ms for default service account to be created ...
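(The service-account wait above only confirms that the "default" ServiceAccount exists in the default namespace; a hand-run equivalent, sketched with the same assumptions as above:)

	# the controller manager creates this account shortly after the namespace appears
	kubectl --context addons-960652 -n default get serviceaccount default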
	I0908 11:34:45.985424  619897 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:34:45.992021  619897 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 11:34:45.992062  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:45.995245  619897 system_pods.go:86] 20 kube-system pods found
	I0908 11:34:45.995285  619897 system_pods.go:89] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending
	I0908 11:34:45.995301  619897 system_pods.go:89] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:34:45.995310  619897 system_pods.go:89] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:45.995319  619897 system_pods.go:89] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending
	I0908 11:34:45.995325  619897 system_pods.go:89] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending
	I0908 11:34:45.995330  619897 system_pods.go:89] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:45.995337  619897 system_pods.go:89] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:45.995350  619897 system_pods.go:89] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:45.995356  619897 system_pods.go:89] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:45.995362  619897 system_pods.go:89] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending
	I0908 11:34:45.995376  619897 system_pods.go:89] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:45.995382  619897 system_pods.go:89] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:45.995397  619897 system_pods.go:89] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:45.995404  619897 system_pods.go:89] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending
	I0908 11:34:45.995411  619897 system_pods.go:89] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending
	I0908 11:34:45.995415  619897 system_pods.go:89] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending
	I0908 11:34:45.995420  619897 system_pods.go:89] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending
	I0908 11:34:45.995428  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:45.995443  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:45.995450  619897 system_pods.go:89] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Pending
	I0908 11:34:45.995478  619897 retry.go:31] will retry after 284.87762ms: missing components: kube-dns
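(Each retry above is driven by the same gap: most addon pods may still be Pending, but the wait loop only blocks on the components it treats as required, and here that is kube-dns, served by the coredns pods. A quick manual check of that component; a sketch, noting that the k8s-app=kube-dns label is the conventional CoreDNS selector and is not printed in this log:)

	# CoreDNS pods carry the kube-dns app label in kubeadm-provisioned clusters
	kubectl --context addons-960652 -n kube-system get pods -l k8s-app=kube-dns -o wide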
	I0908 11:34:46.110909  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:46.112086  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:46.187379  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:46.286844  619897 system_pods.go:86] 20 kube-system pods found
	I0908 11:34:46.286891  619897 system_pods.go:89] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 11:34:46.286902  619897 system_pods.go:89] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:34:46.286912  619897 system_pods.go:89] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:46.286920  619897 system_pods.go:89] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:34:46.286927  619897 system_pods.go:89] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending
	I0908 11:34:46.286933  619897 system_pods.go:89] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:46.286938  619897 system_pods.go:89] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:46.286944  619897 system_pods.go:89] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:46.286957  619897 system_pods.go:89] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:46.286964  619897 system_pods.go:89] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:34:46.286971  619897 system_pods.go:89] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:46.286977  619897 system_pods.go:89] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:46.286984  619897 system_pods.go:89] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:46.286990  619897 system_pods.go:89] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending
	I0908 11:34:46.286998  619897 system_pods.go:89] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:34:46.287008  619897 system_pods.go:89] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:34:46.287016  619897 system_pods.go:89] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:34:46.287088  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:46.287098  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:46.287112  619897 system_pods.go:89] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 11:34:46.287137  619897 retry.go:31] will retry after 379.968765ms: missing components: kube-dns
	I0908 11:34:46.486677  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:46.685613  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:46.686389  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:46.687210  619897 system_pods.go:86] 20 kube-system pods found
	I0908 11:34:46.687242  619897 system_pods.go:89] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 11:34:46.687269  619897 system_pods.go:89] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:34:46.687285  619897 system_pods.go:89] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:46.687298  619897 system_pods.go:89] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:34:46.687312  619897 system_pods.go:89] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 11:34:46.687320  619897 system_pods.go:89] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:46.687333  619897 system_pods.go:89] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:46.687341  619897 system_pods.go:89] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:46.687347  619897 system_pods.go:89] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:46.687358  619897 system_pods.go:89] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:34:46.687368  619897 system_pods.go:89] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:46.687374  619897 system_pods.go:89] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:46.687385  619897 system_pods.go:89] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:46.687395  619897 system_pods.go:89] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 11:34:46.687406  619897 system_pods.go:89] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:34:46.687418  619897 system_pods.go:89] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:34:46.687433  619897 system_pods.go:89] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:34:46.687441  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:46.687452  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:46.687466  619897 system_pods.go:89] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 11:34:46.687487  619897 retry.go:31] will retry after 345.410441ms: missing components: kube-dns
	I0908 11:34:46.780566  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:46.985856  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:47.042361  619897 system_pods.go:86] 20 kube-system pods found
	I0908 11:34:47.042397  619897 system_pods.go:89] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 11:34:47.042403  619897 system_pods.go:89] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Running
	I0908 11:34:47.042410  619897 system_pods.go:89] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:47.042417  619897 system_pods.go:89] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:34:47.042423  619897 system_pods.go:89] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 11:34:47.042427  619897 system_pods.go:89] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:47.042430  619897 system_pods.go:89] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:47.042433  619897 system_pods.go:89] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:47.042437  619897 system_pods.go:89] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:47.042443  619897 system_pods.go:89] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:34:47.042446  619897 system_pods.go:89] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:47.042450  619897 system_pods.go:89] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:47.042455  619897 system_pods.go:89] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:47.042463  619897 system_pods.go:89] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 11:34:47.042469  619897 system_pods.go:89] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:34:47.042476  619897 system_pods.go:89] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:34:47.042481  619897 system_pods.go:89] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:34:47.042486  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:47.042491  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:47.042497  619897 system_pods.go:89] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Running
	I0908 11:34:47.042506  619897 system_pods.go:126] duration metric: took 1.057074321s to wait for k8s-apps to be running ...
	I0908 11:34:47.042516  619897 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:34:47.042562  619897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:34:47.054907  619897 system_svc.go:56] duration metric: took 12.376844ms WaitForService to wait for kubelet
	I0908 11:34:47.054943  619897 kubeadm.go:578] duration metric: took 46.100563827s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:34:47.054970  619897 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:34:47.058332  619897 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 11:34:47.058365  619897 node_conditions.go:123] node cpu capacity is 8
	I0908 11:34:47.058382  619897 node_conditions.go:105] duration metric: took 3.406035ms to run NodePressure ...
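(The NodePressure verification above just reads the node's capacity and condition list: 304681132Ki of ephemeral storage and 8 CPUs in this run. The same data can be pulled directly, as a sketch under the same context assumption:)

	# capacity plus the full condition list (MemoryPressure, DiskPressure, PIDPressure, Ready)
	kubectl --context addons-960652 get node addons-960652 \
	  -o jsonpath='{.status.capacity}{"\n"}{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'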
	I0908 11:34:47.058398  619897 start.go:241] waiting for startup goroutines ...
	I0908 11:34:47.112208  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:47.112291  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:47.185283  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:47.487397  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:47.611447  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:47.611509  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:47.685302  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:47.986859  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:48.111790  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:48.111845  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:48.212537  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:48.485762  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:48.611216  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:48.611231  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:48.685185  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:48.987322  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:49.111952  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:49.112028  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:49.185099  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:49.487146  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:49.611357  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:49.611425  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:49.685831  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:49.986520  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:50.111235  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:50.111566  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:50.185322  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:50.487144  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:50.611525  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:50.611562  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:50.685861  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:50.987117  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:51.111479  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:51.111710  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:51.184953  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:51.487418  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:51.612146  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:51.612194  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:51.685130  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:51.987053  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:52.111261  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:52.111714  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:52.212321  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:52.486467  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:52.611596  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:52.611741  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:52.685289  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:52.987204  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:53.111519  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:53.111677  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:53.185517  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:53.486524  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:53.610506  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:53.610565  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:53.685787  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:53.986777  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:54.111402  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:54.111489  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:54.185264  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:54.486911  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:54.611931  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:54.612774  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:54.685987  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:54.987513  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:55.181760  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:55.182437  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:55.185733  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:55.487716  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:55.610951  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:55.611106  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:55.685506  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:55.986295  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:56.111778  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:56.111836  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:56.185666  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:56.486006  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:56.612033  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:56.612153  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:56.685264  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:56.987838  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:57.111598  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:57.111679  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:57.185794  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:57.487146  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:57.611478  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:57.611627  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:57.685360  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:57.986392  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:58.113269  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:58.113444  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:58.185387  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:58.485940  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:58.611186  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:58.611323  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:58.685104  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:58.986673  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:59.111323  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:59.111375  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:59.185515  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:59.486068  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:59.611784  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:59.611959  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:59.686378  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:59.985886  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:00.111043  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:00.111095  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:00.184961  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:00.486898  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:00.611151  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:00.611215  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:00.685620  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:00.987881  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:01.183971  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:01.184269  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:01.185678  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:01.587361  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:01.680444  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:01.680624  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:01.686608  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:01.989266  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:02.186296  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:02.186419  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:02.186868  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:02.488007  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:02.523966  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:35:02.679439  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:02.679721  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:02.685647  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:02.986649  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:03.111080  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:03.111248  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:03.185213  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:03.486816  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:03.612565  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:03.612575  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:03.686155  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:03.985572  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:04.111532  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:04.111675  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:04.185282  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:04.302946  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.778927349s)
	W0908 11:35:04.302999  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:35:04.303056  619897 retry.go:31] will retry after 10.891744842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
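(The failure above is a client-side validation error rather than an apply conflict: every object from ig-deployment.yaml applies cleanly ("unchanged"/"configured"), but a document in ig-crd.yaml is missing its apiVersion and kind fields, so kubectl rejects the file and the addon manager schedules a retry. The error text itself offers --validate=false as an escape hatch, though the real fix is a manifest whose documents all carry apiVersion and kind. Two ways to confirm this from outside the node, sketched using the file paths the log itself references inside the minikube container:)

	# look at the top of the CRD manifest as it exists inside the node
	minikube -p addons-960652 ssh -- "sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml"
	# re-run only the client-side decode/validation without touching the cluster
	minikube -p addons-960652 ssh -- "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml"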
	I0908 11:35:04.486625  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:04.610788  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:04.610928  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:04.685568  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:04.986935  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:05.111418  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:05.111588  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:05.185420  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:05.486018  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:05.612212  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:05.612425  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:05.685384  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:05.987895  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:06.111392  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:06.111412  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:06.185292  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:06.487043  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:06.678720  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:06.679201  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:06.688006  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:06.987196  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:07.111496  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:07.111639  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:07.186150  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:07.487047  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:07.611480  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:07.611717  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:07.685625  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:07.986497  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:08.110672  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:08.110849  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:08.185434  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:08.486033  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:08.611454  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:08.611457  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:08.685572  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:08.986627  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:09.110829  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:09.110857  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:09.186049  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:09.486768  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:09.610976  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:09.611024  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:09.685950  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:09.986636  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:10.112589  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:10.112669  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:10.185518  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:10.486748  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:10.611138  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:10.611328  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:10.685310  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:10.987624  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:11.111587  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:11.111615  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:11.185594  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:11.486221  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:11.612143  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:11.612579  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:11.684970  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:11.987834  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:12.111066  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:12.111278  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:12.184825  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:12.487162  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:12.679998  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:12.680905  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:12.684487  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:12.987067  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:13.179709  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:13.180519  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:13.185146  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:13.486970  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:13.612196  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:13.612343  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:13.685581  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:13.986180  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:14.111891  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:14.176713  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:14.185621  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:14.486342  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:14.611285  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:14.611510  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:14.685104  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:14.987483  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:15.111637  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:15.111778  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:15.185828  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:15.195953  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:35:15.486548  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:15.611725  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:15.611885  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:15.685709  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:15.986655  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:16.111631  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:16.112272  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:16.185813  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:16.280386  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.084382124s)
	W0908 11:35:16.280447  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 11:35:16.280578  619897 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
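
The validation failure above means the generated /etc/kubernetes/addons/ig-crd.yaml was applied without top-level `apiVersion` and `kind` fields, which kubectl requires on every object. The actual inspektor-gadget CRD content is not shown in this log, so the group and kind names below are placeholders; as an illustration, a well-formed CRD manifest begins like this:

    # Minimal sketch of a valid CRD header; group/kind names are illustrative only,
    # not the real inspektor-gadget CRD shipped in ig-crd.yaml.
    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.example.minikube.test
    spec:
      group: example.minikube.test
      names:
        kind: Example
        plural: examples
        singular: example
      scope: Namespaced
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
    EOF
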
	I0908 11:35:16.487315  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:16.611752  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:16.611920  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:16.685493  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:16.986242  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:17.111693  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:17.111850  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:17.186256  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:17.487542  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:17.610619  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:17.610686  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:17.685867  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:17.986852  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:18.112297  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:18.112314  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:18.185017  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:18.486546  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:18.611918  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:18.611995  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:18.686360  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:18.987509  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:19.110955  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:19.110989  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:19.184874  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:19.487093  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:19.611420  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:19.611538  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:19.685946  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:19.986558  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:20.110946  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:20.110960  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:20.185360  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:20.485902  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:20.611223  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:20.611385  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:20.685562  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:20.986095  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:21.112201  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:21.112247  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:21.185236  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:21.487233  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:21.611512  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:21.611572  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:21.685341  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:21.986013  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:22.111708  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:22.111812  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:22.186103  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:22.487680  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:22.610921  619897 kapi.go:107] duration metric: took 1m15.004130743s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 11:35:22.610957  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:22.686046  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:22.987155  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:23.111875  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:23.185740  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:23.486982  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:23.611197  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:23.685448  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:23.986470  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:24.111988  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:24.186543  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:24.489037  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:24.611464  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:24.685753  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:24.986773  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:25.111427  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:25.211590  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:25.486293  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:25.611514  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:25.685757  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:25.987023  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:26.111366  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:26.185923  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:26.487353  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:26.612346  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:26.686339  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:26.987274  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:27.112518  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:27.185724  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:27.485886  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:27.610782  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:27.685679  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:27.986530  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:28.111630  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:28.185958  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:28.486562  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:28.612135  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:28.685038  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:28.986555  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:29.112325  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:29.185439  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:29.486574  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:29.612598  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:29.685178  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:29.987054  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:30.111386  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:30.185406  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:30.487581  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:30.680737  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:30.685999  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:30.987513  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:31.197154  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:31.199299  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:31.487047  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:31.678905  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:31.685737  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:31.987274  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:32.180471  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:32.185408  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:32.487437  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:32.611728  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:32.687138  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:32.986554  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:33.111901  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:33.186086  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:33.487250  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:33.611564  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:33.686019  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:33.986813  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:34.111349  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:34.185495  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:34.486931  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:34.611021  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:34.686422  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:34.986348  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:35.111795  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:35.185954  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:35.487585  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:35.611819  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:35.686342  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:35.987090  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:36.111232  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:36.185706  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:36.492311  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:36.611578  619897 kapi.go:107] duration metric: took 1m29.004278672s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 11:35:36.685432  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:36.985719  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:37.185744  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:37.486677  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:37.686085  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:37.987106  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:38.187053  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:38.487258  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:38.685086  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:38.986733  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:39.185944  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:39.487125  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:39.685639  619897 kapi.go:107] duration metric: took 1m27.503813529s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 11:35:39.687783  619897 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-960652 cluster.
	I0908 11:35:39.689241  619897 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 11:35:39.690786  619897 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0908 11:35:39.985950  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:40.487312  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:40.985961  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:41.487275  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:41.986449  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:42.487192  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:42.986372  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:43.486807  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:43.987160  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:44.486339  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:44.987281  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:45.487135  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:45.986240  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:46.486918  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:46.986137  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:47.486893  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:47.985973  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:48.486492  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:48.986167  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:49.487105  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:49.986408  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:50.486565  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:50.986940  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:51.487397  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:51.987134  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:52.486338  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:52.986294  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:53.486897  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:53.987356  619897 kapi.go:107] duration metric: took 1m45.004908227s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 11:35:53.989102  619897 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0908 11:35:53.990257  619897 addons.go:514] duration metric: took 1m53.03583503s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin registry-creds metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0908 11:35:53.990314  619897 start.go:246] waiting for cluster config update ...
	I0908 11:35:53.990336  619897 start.go:255] writing updated cluster config ...
	I0908 11:35:53.990625  619897 ssh_runner.go:195] Run: rm -f paused
	I0908 11:35:53.994735  619897 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:35:53.998298  619897 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-np8sm" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.003374  619897 pod_ready.go:94] pod "coredns-66bc5c9577-np8sm" is "Ready"
	I0908 11:35:54.003404  619897 pod_ready.go:86] duration metric: took 5.078062ms for pod "coredns-66bc5c9577-np8sm" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.006058  619897 pod_ready.go:83] waiting for pod "etcd-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.010406  619897 pod_ready.go:94] pod "etcd-addons-960652" is "Ready"
	I0908 11:35:54.010431  619897 pod_ready.go:86] duration metric: took 4.342035ms for pod "etcd-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.012580  619897 pod_ready.go:83] waiting for pod "kube-apiserver-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.016715  619897 pod_ready.go:94] pod "kube-apiserver-addons-960652" is "Ready"
	I0908 11:35:54.016737  619897 pod_ready.go:86] duration metric: took 4.134176ms for pod "kube-apiserver-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.018644  619897 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.398415  619897 pod_ready.go:94] pod "kube-controller-manager-addons-960652" is "Ready"
	I0908 11:35:54.398443  619897 pod_ready.go:86] duration metric: took 379.776043ms for pod "kube-controller-manager-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.599247  619897 pod_ready.go:83] waiting for pod "kube-proxy-gz2w6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.999851  619897 pod_ready.go:94] pod "kube-proxy-gz2w6" is "Ready"
	I0908 11:35:54.999881  619897 pod_ready.go:86] duration metric: took 400.608241ms for pod "kube-proxy-gz2w6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:55.199982  619897 pod_ready.go:83] waiting for pod "kube-scheduler-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:55.598935  619897 pod_ready.go:94] pod "kube-scheduler-addons-960652" is "Ready"
	I0908 11:35:55.598968  619897 pod_ready.go:86] duration metric: took 398.947233ms for pod "kube-scheduler-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:55.598984  619897 pod_ready.go:40] duration metric: took 1.604216868s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:35:55.645728  619897 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:35:55.647591  619897 out.go:179] * Done! kubectl is now configured to use "addons-960652" cluster and "default" namespace by default
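
The gcp-auth notice earlier in this output states that credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label. A minimal sketch of opting a pod out, assuming the same cluster context used in this report (the pod name and command are illustrative, and the "true" value follows common usage; the notice itself only mentions the key):

    kubectl --context addons-960652 apply -f - <<'EOF'
    # Illustrative pod that opts out of gcp-auth credential mounting via the
    # gcp-auth-skip-secret label mentioned in the minikube output above.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
        - name: app
          image: gcr.io/k8s-minikube/busybox   # image already pulled elsewhere in this test
          command: ["sleep", "3600"]
    EOF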
	
	
	==> CRI-O <==
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.909445165Z" level=info msg="Stopping pod sandbox: 0b145657bd19a55c7c8ddff7a9715f03e527383af05506abc4a0016a9a6fabd7" id=14013a38-9e2d-4cf2-8be0-2ee6f22eb1f7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.909520486Z" level=info msg="Stopped pod sandbox (already stopped): 0b145657bd19a55c7c8ddff7a9715f03e527383af05506abc4a0016a9a6fabd7" id=14013a38-9e2d-4cf2-8be0-2ee6f22eb1f7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.909992871Z" level=info msg="Removing pod sandbox: 0b145657bd19a55c7c8ddff7a9715f03e527383af05506abc4a0016a9a6fabd7" id=21336dc4-ca9f-4dc4-94e5-35bae7bfdc76 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.917797500Z" level=info msg="Removed pod sandbox: 0b145657bd19a55c7c8ddff7a9715f03e527383af05506abc4a0016a9a6fabd7" id=21336dc4-ca9f-4dc4-94e5-35bae7bfdc76 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.918463518Z" level=info msg="Stopping pod sandbox: 828393f09964c1ffafc61fcae10c1d274fa151ca4230e18bec2d8c0d45a0cd84" id=c9784750-ff08-49ac-bf15-f2ed7896418e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.918510312Z" level=info msg="Stopped pod sandbox (already stopped): 828393f09964c1ffafc61fcae10c1d274fa151ca4230e18bec2d8c0d45a0cd84" id=c9784750-ff08-49ac-bf15-f2ed7896418e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.918872451Z" level=info msg="Removing pod sandbox: 828393f09964c1ffafc61fcae10c1d274fa151ca4230e18bec2d8c0d45a0cd84" id=3d4c9d9f-3cb7-4f12-87f4-c4b877a7d25f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.927067498Z" level=info msg="Removed pod sandbox: 828393f09964c1ffafc61fcae10c1d274fa151ca4230e18bec2d8c0d45a0cd84" id=3d4c9d9f-3cb7-4f12-87f4-c4b877a7d25f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.927639984Z" level=info msg="Stopping pod sandbox: a84ec66318c26effc781f3ed05dc1ea236bf47ef0af743c1fc83be3c54acd440" id=868efe55-4691-4908-b7ea-a5f4b3ad4068 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.927706232Z" level=info msg="Stopped pod sandbox (already stopped): a84ec66318c26effc781f3ed05dc1ea236bf47ef0af743c1fc83be3c54acd440" id=868efe55-4691-4908-b7ea-a5f4b3ad4068 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.928087869Z" level=info msg="Removing pod sandbox: a84ec66318c26effc781f3ed05dc1ea236bf47ef0af743c1fc83be3c54acd440" id=dd20f000-92be-45b5-947d-58469b343251 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:36:55 addons-960652 crio[1049]: time="2025-09-08 11:36:55.936018939Z" level=info msg="Removed pod sandbox: a84ec66318c26effc781f3ed05dc1ea236bf47ef0af743c1fc83be3c54acd440" id=dd20f000-92be-45b5-947d-58469b343251 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:37:36 addons-960652 crio[1049]: time="2025-09-08 11:37:36.691925983Z" level=info msg="Pulling image: docker.io/nginx:latest" id=ea1e60b2-b24b-4c59-85d1-12012f276514 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:37:36 addons-960652 crio[1049]: time="2025-09-08 11:37:36.698491274Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 08 11:38:31 addons-960652 crio[1049]: time="2025-09-08 11:38:31.692740682Z" level=info msg="Pulling image: docker.io/nginx:latest" id=92b0dff1-b509-4478-977e-030360331480 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:38:31 addons-960652 crio[1049]: time="2025-09-08 11:38:31.699408866Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 08 11:38:56 addons-960652 crio[1049]: time="2025-09-08 11:38:56.939443337Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-p2zlh/POD" id=6a71bda9-e759-4c02-9354-1ad877181aba name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 11:38:56 addons-960652 crio[1049]: time="2025-09-08 11:38:56.939533031Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 11:38:56 addons-960652 crio[1049]: time="2025-09-08 11:38:56.961431538Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-p2zlh Namespace:default ID:2417bc6ef0fdfa3cf858f41a5789aea1c9d62761294b87e996eba245b8685d27 UID:8a3572b2-f36f-4bfe-a4d4-6472fc661464 NetNS:/var/run/netns/70fe246f-c286-45bf-b601-832b567cffa3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 11:38:56 addons-960652 crio[1049]: time="2025-09-08 11:38:56.961474602Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-p2zlh to CNI network \"kindnet\" (type=ptp)"
	Sep 08 11:38:56 addons-960652 crio[1049]: time="2025-09-08 11:38:56.973924302Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-p2zlh Namespace:default ID:2417bc6ef0fdfa3cf858f41a5789aea1c9d62761294b87e996eba245b8685d27 UID:8a3572b2-f36f-4bfe-a4d4-6472fc661464 NetNS:/var/run/netns/70fe246f-c286-45bf-b601-832b567cffa3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 11:38:56 addons-960652 crio[1049]: time="2025-09-08 11:38:56.974121806Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-p2zlh for CNI network kindnet (type=ptp)"
	Sep 08 11:38:56 addons-960652 crio[1049]: time="2025-09-08 11:38:56.977910185Z" level=info msg="Ran pod sandbox 2417bc6ef0fdfa3cf858f41a5789aea1c9d62761294b87e996eba245b8685d27 with infra container: default/hello-world-app-5d498dc89-p2zlh/POD" id=6a71bda9-e759-4c02-9354-1ad877181aba name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 11:38:56 addons-960652 crio[1049]: time="2025-09-08 11:38:56.979610795Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=63c2c1e5-dec0-41bc-879a-54e07d9a2690 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:38:56 addons-960652 crio[1049]: time="2025-09-08 11:38:56.980061077Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=63c2c1e5-dec0-41bc-879a-54e07d9a2690 name=/runtime.v1.ImageService/ImageStatus
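
The CRI-O log above ends with repeated "Pulling image: docker.io/nginx:latest" attempts and an ImageStatus reply that docker.io/kicbase/echo-server:1.0 is not yet present. A hedged way to confirm whether either image ever landed on the node is to list the runtime's images over SSH and check the pod's events; the commands below are a sketch using the same profile and binary path as this report:

    # Sketch only: inspect image and pod state for the stalled pulls noted above.
    out/minikube-linux-amd64 -p addons-960652 ssh "sudo crictl images"
    kubectl --context addons-960652 describe pod nginx
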
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	040337eeadd3e       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                                              2 minutes ago       Running             nginx                                    0                   cbe026d90e099       nginx
	3f1bd2139b6f5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago       Running             busybox                                  0                   ff5b5b008dfeb       busybox
	e225dd1353c5e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   19bf91601224b       csi-hostpathplugin-x742h
	a3e0daff14684       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   19bf91601224b       csi-hostpathplugin-x742h
	dc98ed0921633       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   19bf91601224b       csi-hostpathplugin-x742h
	2ad1b170c6333       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   19bf91601224b       csi-hostpathplugin-x742h
	e9b58257111f3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   19bf91601224b       csi-hostpathplugin-x742h
	d6f7e7940414a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506                            3 minutes ago       Running             gadget                                   0                   db85239541cfa       gadget-jrm97
	d2bd185c0ca11       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             3 minutes ago       Running             controller                               0                   f27a617a58a36       ingress-nginx-controller-9cc49f96f-9mrw4
	62b0c13b35dd5       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   3aabeaf53b3c1       csi-hostpath-resizer-0
	f7f788c90a319       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   a030436d0600c       snapshot-controller-7d9fbc56b8-8gd9h
	a9adb3ab45f72       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   d8b6af65b1141       snapshot-controller-7d9fbc56b8-857s7
	475dea4776563       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago       Running             local-path-provisioner                   0                   c19df1d4f546e       local-path-provisioner-648f6765c9-2dzb2
	b52fd51e1bed1       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago       Running             minikube-ingress-dns                     0                   40fa7b6d17e62       kube-ingress-dns-minikube
	f3d401e158a14       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   19bf91601224b       csi-hostpathplugin-x742h
	026097ca491b2       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             4 minutes ago       Running             csi-attacher                             0                   a190356160f21       csi-hostpath-attacher-0
	9db075b417a5b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   4 minutes ago       Exited              patch                                    0                   c530aa43b1c67       ingress-nginx-admission-patch-v5d42
	ec63213aa3e4d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   4 minutes ago       Exited              create                                   0                   6fbd0dac31f83       ingress-nginx-admission-create-llwpn
	15020f93f9117       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago       Running             coredns                                  0                   28fe3efc23b77       coredns-66bc5c9577-np8sm
	197348fff491e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago       Running             storage-provisioner                      0                   02a8ce96831d2       storage-provisioner
	5d7f0a6932d37       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago       Running             kindnet-cni                              0                   3c6ebe59944d3       kindnet-hcvll
	599d0409ebde1       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             4 minutes ago       Running             kube-proxy                               0                   54443dfaf20e7       kube-proxy-gz2w6
	eb1813ea98da6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago       Running             etcd                                     0                   6114837aea7c4       etcd-addons-960652
	d629720d8e10e       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             5 minutes ago       Running             kube-scheduler                           0                   351287eb6cb47       kube-scheduler-addons-960652
	f34ea2a919bfd       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             5 minutes ago       Running             kube-controller-manager                  0                   63ffabb2c612c       kube-controller-manager-addons-960652
	b1c29c4267f5b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             5 minutes ago       Running             kube-apiserver                           0                   e7a8b62c1e67e       kube-apiserver-addons-960652
	
	
	==> coredns [15020f93f911792a0cf0f70bbcb201066db25d4f586c88da47522ac9476fe8cd] <==
	[INFO] 10.244.0.13:59513 - 33616 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.006950088s
	[INFO] 10.244.0.13:50581 - 56941 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007302128s
	[INFO] 10.244.0.13:50581 - 56405 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.008375517s
	[INFO] 10.244.0.13:52541 - 9508 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006553206s
	[INFO] 10.244.0.13:52541 - 9148 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006659506s
	[INFO] 10.244.0.13:56646 - 24638 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000167059s
	[INFO] 10.244.0.13:56646 - 24439 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000201692s
	[INFO] 10.244.0.21:43505 - 36048 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000255476s
	[INFO] 10.244.0.21:58790 - 30387 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000328403s
	[INFO] 10.244.0.21:36883 - 49243 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000178057s
	[INFO] 10.244.0.21:34350 - 21968 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000232217s
	[INFO] 10.244.0.21:48892 - 38215 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157508s
	[INFO] 10.244.0.21:52913 - 47458 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000216863s
	[INFO] 10.244.0.21:56914 - 28194 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005781461s
	[INFO] 10.244.0.21:32792 - 37129 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00711672s
	[INFO] 10.244.0.21:46920 - 51434 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005226545s
	[INFO] 10.244.0.21:40419 - 62920 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007853906s
	[INFO] 10.244.0.21:41090 - 30393 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006746076s
	[INFO] 10.244.0.21:32954 - 18309 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006879666s
	[INFO] 10.244.0.21:53158 - 50628 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006161526s
	[INFO] 10.244.0.21:46401 - 41835 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007020861s
	[INFO] 10.244.0.21:34512 - 56978 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000919508s
	[INFO] 10.244.0.21:33364 - 39015 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000931191s
	[INFO] 10.244.0.27:54986 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000339826s
	[INFO] 10.244.0.27:36962 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000204622s
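
The NXDOMAIN entries above are the normal effect of the pod's resolv.conf search list: an external name such as storage.googleapis.com is first expanded against each cluster search domain (because of the ndots option) before the bare name resolves with NOERROR. A hedged way to see that search list from inside the cluster, reusing the busybox pod that appears elsewhere in this report:

    # Sketch only: show the search domains and ndots option behind the lookups above.
    kubectl --context addons-960652 exec busybox -- cat /etc/resolv.conf
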
	
	
	==> describe nodes <==
	Name:               addons-960652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-960652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=addons-960652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_33_56_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-960652
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-960652"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:33:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-960652
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:38:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:36:59 +0000   Mon, 08 Sep 2025 11:33:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:36:59 +0000   Mon, 08 Sep 2025 11:33:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:36:59 +0000   Mon, 08 Sep 2025 11:33:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:36:59 +0000   Mon, 08 Sep 2025 11:34:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-960652
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 d355e2ae02d844869ee2220acbc5d523
	  System UUID:                f41e624b-4547-4702-9350-59a549f70159
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     hello-world-app-5d498dc89-p2zlh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  gadget                      gadget-jrm97                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-9mrw4    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m51s
	  kube-system                 coredns-66bc5c9577-np8sm                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m57s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpathplugin-x742h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 etcd-addons-960652                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m3s
	  kube-system                 kindnet-hcvll                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m57s
	  kube-system                 kube-apiserver-addons-960652                250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-960652       200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-gz2w6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-addons-960652                100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 snapshot-controller-7d9fbc56b8-857s7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-8gd9h        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  local-path-storage          local-path-provisioner-648f6765c9-2dzb2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m51s                kube-proxy       
	  Normal   Starting                 5m9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node addons-960652 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node addons-960652 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s (x8 over 5m9s)  kubelet          Node addons-960652 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m3s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m3s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m3s                 kubelet          Node addons-960652 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s                 kubelet          Node addons-960652 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s                 kubelet          Node addons-960652 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m59s                node-controller  Node addons-960652 event: Registered Node addons-960652 in Controller
	  Normal   NodeReady                4m13s                kubelet          Node addons-960652 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.003955] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000005] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000001] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +8.187305] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000030] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000006] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000002] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[Sep 8 11:36] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +1.022122] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +2.019826] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +4.219629] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[Sep 8 11:37] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +16.130550] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +33.273137] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	
	
	==> etcd [eb1813ea98da6d09685dd2a84a69cc589435452d8367cd32201b9d54d966eaf9] <==
	{"level":"warn","ts":"2025-09-08T11:34:05.094092Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"316.065321ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry-creds\" limit:1 ","response":"range_response_count:1 size:7265"}
	{"level":"warn","ts":"2025-09-08T11:34:05.094171Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"294.854758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-hcvll\" limit:1 ","response":"range_response_count:1 size:5305"}
	{"level":"warn","ts":"2025-09-08T11:34:05.094203Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.079115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-08T11:34:05.095705Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"296.08127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T11:34:05.099828Z","caller":"traceutil/trace.go:172","msg":"trace[375842253] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:434; }","duration":"417.715612ms","start":"2025-09-08T11:34:04.682098Z","end":"2025-09-08T11:34:05.099813Z","steps":["trace[375842253] 'agreement among raft nodes before linearized reading'  (duration: 299.511246ms)","trace[375842253] 'range keys from in-memory index tree'  (duration: 111.993014ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T11:34:05.177083Z","caller":"traceutil/trace.go:172","msg":"trace[4141782] range","detail":"{range_begin:/registry/deployments/kube-system/registry-creds; range_end:; response_count:1; response_revision:434; }","duration":"399.054807ms","start":"2025-09-08T11:34:04.778006Z","end":"2025-09-08T11:34:05.177061Z","steps":["trace[4141782] 'agreement among raft nodes before linearized reading'  (duration: 203.943485ms)","trace[4141782] 'range keys from in-memory index tree'  (duration: 112.022001ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T11:34:05.177216Z","caller":"traceutil/trace.go:172","msg":"trace[764876706] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-hcvll; range_end:; response_count:1; response_revision:434; }","duration":"377.897225ms","start":"2025-09-08T11:34:04.799310Z","end":"2025-09-08T11:34:05.177207Z","steps":["trace[764876706] 'agreement among raft nodes before linearized reading'  (duration: 182.632567ms)","trace[764876706] 'range keys from in-memory index tree'  (duration: 112.183126ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:34:05.178257Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:34:04.799289Z","time spent":"378.942309ms","remote":"127.0.0.1:55838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":5329,"request content":"key:\"/registry/pods/kube-system/kindnet-hcvll\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T11:34:05.178481Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:34:04.682090Z","time spent":"496.379004ms","remote":"127.0.0.1:56376","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/kube-system/registry\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T11:34:05.178629Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:34:04.777987Z","time spent":"400.612532ms","remote":"127.0.0.1:56376","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":7289,"request content":"key:\"/registry/deployments/kube-system/registry-creds\" limit:1 "}
	{"level":"info","ts":"2025-09-08T11:34:05.177398Z","caller":"traceutil/trace.go:172","msg":"trace[1196628434] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:434; }","duration":"194.26934ms","start":"2025-09-08T11:34:04.983118Z","end":"2025-09-08T11:34:05.177387Z","steps":["trace[1196628434] 'range keys from in-memory index tree'  (duration: 111.003409ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:34:05.179505Z","caller":"traceutil/trace.go:172","msg":"trace[729211829] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:434; }","duration":"379.876324ms","start":"2025-09-08T11:34:04.799612Z","end":"2025-09-08T11:34:05.179488Z","steps":["trace[729211829] 'agreement among raft nodes before linearized reading'  (duration: 182.321546ms)","trace[729211829] 'range keys from in-memory index tree'  (duration: 113.742696ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:34:05.179756Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:34:04.799604Z","time spent":"380.134014ms","remote":"127.0.0.1:55740","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T11:34:05.888187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.580507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:1 size:3351"}
	{"level":"info","ts":"2025-09-08T11:34:05.888370Z","caller":"traceutil/trace.go:172","msg":"trace[2075139565] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:495; }","duration":"102.774288ms","start":"2025-09-08T11:34:05.785577Z","end":"2025-09-08T11:34:05.888351Z","steps":["trace[2075139565] 'agreement among raft nodes before linearized reading'  (duration: 102.460147ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:34:06.981604Z","caller":"traceutil/trace.go:172","msg":"trace[140328556] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"104.602292ms","start":"2025-09-08T11:34:06.876983Z","end":"2025-09-08T11:34:06.981586Z","steps":["trace[140328556] 'process raft request'  (duration: 103.826859ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:34:06.982008Z","caller":"traceutil/trace.go:172","msg":"trace[1025344530] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"103.565338ms","start":"2025-09-08T11:34:06.878414Z","end":"2025-09-08T11:34:06.981979Z","steps":["trace[1025344530] 'process raft request'  (duration: 102.70568ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:34:06.982384Z","caller":"traceutil/trace.go:172","msg":"trace[40552992] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"103.629373ms","start":"2025-09-08T11:34:06.878738Z","end":"2025-09-08T11:34:06.982367Z","steps":["trace[40552992] 'process raft request'  (duration: 102.433256ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T11:34:09.399024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:34:09.419310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:34:29.926253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:34:29.933456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:34:29.957274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51944","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:35:25.876478Z","caller":"traceutil/trace.go:172","msg":"trace[623256269] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"148.215321ms","start":"2025-09-08T11:35:25.728243Z","end":"2025-09-08T11:35:25.876458Z","steps":["trace[623256269] 'process raft request'  (duration: 148.037395ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:35:31.195182Z","caller":"traceutil/trace.go:172","msg":"trace[1241191236] transaction","detail":"{read_only:false; response_revision:1184; number_of_response:1; }","duration":"112.862332ms","start":"2025-09-08T11:35:31.082299Z","end":"2025-09-08T11:35:31.195161Z","steps":["trace[1241191236] 'process raft request'  (duration: 112.473492ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:38:58 up  2:21,  0 users,  load average: 0.41, 1.35, 3.66
	Linux addons-960652 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [5d7f0a6932d372d22b05cc05eee35bf50224e6b5f78b8ad555a511f89befb62e] <==
	I0908 11:36:55.299875       1 main.go:301] handling current node
	I0908 11:37:05.302327       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:37:05.302380       1 main.go:301] handling current node
	I0908 11:37:15.301294       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:37:15.301334       1 main.go:301] handling current node
	I0908 11:37:25.300170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:37:25.300220       1 main.go:301] handling current node
	I0908 11:37:35.299748       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:37:35.299785       1 main.go:301] handling current node
	I0908 11:37:45.304616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:37:45.304655       1 main.go:301] handling current node
	I0908 11:37:55.299841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:37:55.299885       1 main.go:301] handling current node
	I0908 11:38:05.307046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:05.307086       1 main.go:301] handling current node
	I0908 11:38:15.304465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:15.304505       1 main.go:301] handling current node
	I0908 11:38:25.299847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:25.299902       1 main.go:301] handling current node
	I0908 11:38:35.299186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:35.299252       1 main.go:301] handling current node
	I0908 11:38:45.304460       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:45.304509       1 main.go:301] handling current node
	I0908 11:38:55.301670       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:38:55.301722       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b1c29c4267f5b8e8626d17c51f00a91014b42f7e5064ceb3dd7f9e9035b18520] <==
	W0908 11:34:45.502130       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.82.164:443: connect: connection refused
	E0908 11:34:45.502180       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.82.164:443: connect: connection refused" logger="UnhandledError"
	W0908 11:34:45.589958       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.82.164:443: connect: connection refused
	E0908 11:34:45.590003       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.82.164:443: connect: connection refused" logger="UnhandledError"
	E0908 11:34:52.059440       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.85.112:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.85.112:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.85.112:443: connect: connection refused" logger="UnhandledError"
	W0908 11:34:52.059554       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 11:34:52.059738       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 11:34:52.083750       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0908 11:34:56.265950       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:35:16.001662       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:36:04.201341       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 11:36:06.378192       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47012: use of closed network connection
	E0908 11:36:06.553294       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47030: use of closed network connection
	I0908 11:36:15.711979       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.0.75"}
	I0908 11:36:36.618832       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 11:36:36.818630       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.102.130"}
	I0908 11:36:38.404920       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:36:53.087754       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0908 11:37:05.409304       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:37:42.638086       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:38:35.019218       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:38:56.787115       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.162.248"}
	
	
	==> kube-controller-manager [f34ea2a919bfd72165d68dd34013bba56722328d504a8ed43f59355ccd0b9579] <==
	I0908 11:33:59.909671       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0908 11:33:59.909794       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-960652"
	I0908 11:33:59.909835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 11:33:59.909850       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0908 11:33:59.909985       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 11:33:59.910133       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 11:33:59.910254       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0908 11:33:59.910289       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 11:33:59.911034       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0908 11:33:59.912920       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:33:59.914370       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 11:33:59.935049       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:34:05.498092       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="default/cloud-spanner-emulator" err="EndpointSlice informer cache is out of date"
	E0908 11:34:06.397177       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0908 11:34:29.918356       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 11:34:29.918512       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0908 11:34:29.918553       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0908 11:34:29.946811       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0908 11:34:29.950895       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0908 11:34:30.018939       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:34:30.051123       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:34:49.986216       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0908 11:36:19.876777       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0908 11:36:41.521251       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0908 11:36:44.128028       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [599d0409ebde1f2bd14aa540b01ef6e24cd7100d818dbd600daed98df824357a] <==
	I0908 11:34:05.394445       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:34:06.178921       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:34:06.279441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:34:06.279503       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:34:06.279758       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:34:06.490818       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:34:06.490978       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:34:06.576226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:34:06.576832       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:34:06.577304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:34:06.579321       1 config.go:200] "Starting service config controller"
	I0908 11:34:06.581303       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:34:06.580020       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:34:06.581494       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:34:06.580647       1 config.go:309] "Starting node config controller"
	I0908 11:34:06.581583       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:34:06.581615       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:34:06.580045       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:34:06.581691       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:34:06.681528       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:34:06.681735       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:34:06.681780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d629720d8e10ef56f43bfb64a71c210e8766e43becacccb06e78365c2f7da60e] <==
	E0908 11:33:53.182101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 11:33:53.182256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 11:33:53.182425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 11:33:53.182546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 11:33:53.182774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 11:33:53.182812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 11:33:53.182888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 11:33:53.182980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 11:33:53.183058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 11:33:53.183171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 11:33:53.183286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 11:33:53.183413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 11:33:53.183468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 11:33:53.183521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 11:33:53.183568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 11:33:53.185770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 11:33:54.007333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 11:33:54.058736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 11:33:54.096543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 11:33:54.111349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 11:33:54.136614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 11:33:54.149771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 11:33:54.259506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 11:33:54.259505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0908 11:33:54.598099       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 11:38:05 addons-960652 kubelet[1680]: E0908 11:38:05.836286    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331485835948739  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:05 addons-960652 kubelet[1680]: E0908 11:38:05.836330    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331485835948739  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:06 addons-960652 kubelet[1680]: E0908 11:38:06.784352    1680 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 08 11:38:06 addons-960652 kubelet[1680]: E0908 11:38:06.784414    1680 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 08 11:38:06 addons-960652 kubelet[1680]: E0908 11:38:06.784510    1680 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(990d30a1-800d-4554-930c-b8e09bd450c0): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 11:38:06 addons-960652 kubelet[1680]: E0908 11:38:06.784546    1680 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="990d30a1-800d-4554-930c-b8e09bd450c0"
	Sep 08 11:38:15 addons-960652 kubelet[1680]: E0908 11:38:15.838228    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331495837939796  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:15 addons-960652 kubelet[1680]: E0908 11:38:15.838266    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331495837939796  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:20 addons-960652 kubelet[1680]: E0908 11:38:20.691373    1680 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="990d30a1-800d-4554-930c-b8e09bd450c0"
	Sep 08 11:38:25 addons-960652 kubelet[1680]: E0908 11:38:25.840350    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331505840099403  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:25 addons-960652 kubelet[1680]: E0908 11:38:25.840382    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331505840099403  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:35 addons-960652 kubelet[1680]: I0908 11:38:35.692527    1680 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 11:38:35 addons-960652 kubelet[1680]: E0908 11:38:35.842882    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331515842649184  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:35 addons-960652 kubelet[1680]: E0908 11:38:35.842915    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331515842649184  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:45 addons-960652 kubelet[1680]: E0908 11:38:45.845721    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331525845329173  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:45 addons-960652 kubelet[1680]: E0908 11:38:45.845769    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331525845329173  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:55 addons-960652 kubelet[1680]: E0908 11:38:55.781253    1680 container_manager_linux.go:562] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c, memory: /docker/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/system.slice/kubelet.service"
	Sep 08 11:38:55 addons-960652 kubelet[1680]: E0908 11:38:55.848089    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331535847722123  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:55 addons-960652 kubelet[1680]: E0908 11:38:55.848124    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331535847722123  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:38:55 addons-960652 kubelet[1680]: E0908 11:38:55.894908    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8633fff3ac5c5249223e0ae1a529423b49c29c308098a04b75b05429f99f74a0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8633fff3ac5c5249223e0ae1a529423b49c29c308098a04b75b05429f99f74a0/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:38:55 addons-960652 kubelet[1680]: E0908 11:38:55.898202    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f2b60caab235f0363853803cf1eaf4debeeac939ceb22756381fcc6dbd0bb858/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f2b60caab235f0363853803cf1eaf4debeeac939ceb22756381fcc6dbd0bb858/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:38:55 addons-960652 kubelet[1680]: E0908 11:38:55.904874    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7aebf8a4db0f59812ee718335786b59af5e3a4427b74002ab68d29555683b63a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7aebf8a4db0f59812ee718335786b59af5e3a4427b74002ab68d29555683b63a/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:38:55 addons-960652 kubelet[1680]: E0908 11:38:55.906041    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8633fff3ac5c5249223e0ae1a529423b49c29c308098a04b75b05429f99f74a0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8633fff3ac5c5249223e0ae1a529423b49c29c308098a04b75b05429f99f74a0/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:38:56 addons-960652 kubelet[1680]: I0908 11:38:56.776378    1680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp9ts\" (UniqueName: \"kubernetes.io/projected/8a3572b2-f36f-4bfe-a4d4-6472fc661464-kube-api-access-zp9ts\") pod \"hello-world-app-5d498dc89-p2zlh\" (UID: \"8a3572b2-f36f-4bfe-a4d4-6472fc661464\") " pod="default/hello-world-app-5d498dc89-p2zlh"
	Sep 08 11:38:56 addons-960652 kubelet[1680]: W0908 11:38:56.976496    1680 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/crio-2417bc6ef0fdfa3cf858f41a5789aea1c9d62761294b87e996eba245b8685d27 WatchSource:0}: Error finding container 2417bc6ef0fdfa3cf858f41a5789aea1c9d62761294b87e996eba245b8685d27: Status 404 returned error can't find the container with id 2417bc6ef0fdfa3cf858f41a5789aea1c9d62761294b87e996eba245b8685d27
	
	
	==> storage-provisioner [197348fff491ed8e82c4dfed081b5664cda583220b70b29b5f53b687db96e7ab] <==
	W0908 11:38:34.106012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:36.109721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:36.114489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:38.118417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:38.124468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:40.128636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:40.133282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:42.136502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:42.141533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:44.144521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:44.149180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:46.152626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:46.156847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:48.160419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:48.166783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:50.169774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:50.175166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:52.178637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:52.183298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:54.186583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:54.192287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:56.195460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:56.201356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:58.204957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:38:58.209856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-960652 -n addons-960652
helpers_test.go:269: (dbg) Run:  kubectl --context addons-960652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-p2zlh task-pv-pod ingress-nginx-admission-create-llwpn ingress-nginx-admission-patch-v5d42
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-960652 describe pod hello-world-app-5d498dc89-p2zlh task-pv-pod ingress-nginx-admission-create-llwpn ingress-nginx-admission-patch-v5d42
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-960652 describe pod hello-world-app-5d498dc89-p2zlh task-pv-pod ingress-nginx-admission-create-llwpn ingress-nginx-admission-patch-v5d42: exit status 1 (79.125986ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-p2zlh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-960652/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:38:56 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zp9ts (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zp9ts:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-p2zlh to addons-960652
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-960652/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:36:49 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wx6s5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-wx6s5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  2m10s               default-scheduler  Successfully assigned default/task-pv-pod to addons-960652
	  Warning  Failed     99s                 kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     53s (x2 over 99s)   kubelet            Error: ErrImagePull
	  Warning  Failed     53s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    39s (x2 over 99s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     39s (x2 over 99s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    28s (x3 over 2m9s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-llwpn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v5d42" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-960652 describe pod hello-world-app-5d498dc89-p2zlh task-pv-pod ingress-nginx-admission-create-llwpn ingress-nginx-admission-patch-v5d42: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-960652 addons disable ingress --alsologtostderr -v=1: (7.738662093s)
--- FAIL: TestAddons/parallel/Ingress (151.63s)
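The post-mortem describe above also captures task-pv-pod, which is stuck in ImagePullBackOff because unauthenticated pulls of docker.io/nginx are hitting Docker Hub's rate limit, while hello-world-app is still pulling kicbase/echo-server. When diagnosing pulls like these, a quick way to see which images the node already has cached is (a sketch only; the profile name addons-960652 comes from this run, the commands are stock minikube and crictl):

	out/minikube-linux-amd64 -p addons-960652 image ls
	out/minikube-linux-amd64 -p addons-960652 ssh -- sudo crictl images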

                                                
                                    
TestAddons/parallel/CSI (387.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0908 11:36:32.613329  618620 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 11:36:32.617326  618620 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 11:36:32.617355  618620 kapi.go:107] duration metric: took 4.061952ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.07393ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-960652 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc hpvc -o jsonpath={.status.phase} -n default
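The repeated jsonpath queries above are how the harness polls the hpvc claim until its phase reports Bound. Outside the harness, roughly the same check can be written as a single blocking command (a sketch; the claim name, namespace and 6m timeout are taken from this test):

	kubectl --context addons-960652 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc -n default --timeout=6m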
addons_test.go:562: (dbg) Run:  kubectl --context addons-960652 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [990d30a1-800d-4554-930c-b8e09bd450c0] Pending
helpers_test.go:352: "task-pv-pod" [990d30a1-800d-4554-930c-b8e09bd450c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-960652 -n addons-960652
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-08 11:42:50.257233733 +0000 UTC m=+589.549976201
addons_test.go:567: (dbg) Run:  kubectl --context addons-960652 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-960652 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-960652/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:36:49 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
  IP:  10.244.0.30
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wx6s5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-wx6s5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m1s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-960652
  Warning  Failed     5m30s                 kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m4s (x4 over 5m30s)  kubelet            Error: ErrImagePull
  Warning  Failed     2m4s (x3 over 4m44s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    55s (x10 over 5m30s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     55s (x10 over 5m30s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    43s (x5 over 6m)      kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-960652 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-960652 logs task-pv-pod -n default: exit status 1 (72.266175ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-960652 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
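The failure above is a registry pull problem (toomanyrequests from docker.io) rather than a fault in the csi-hostpath-driver addon. Two ways a run like this could sidestep the unauthenticated rate limit, sketched with this profile's names (regcred is a hypothetical secret name, and the host is assumed to still have pull quota or a cached copy of nginx):

	# Pre-load the image from the host so the kubelet never pulls from docker.io
	docker pull docker.io/nginx
	out/minikube-linux-amd64 -p addons-960652 image load docker.io/nginx
	# Or attach authenticated pull credentials to the default service account
	kubectl --context addons-960652 create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<token>
	kubectl --context addons-960652 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'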
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-960652
helpers_test.go:243: (dbg) docker inspect addons-960652:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c",
	        "Created": "2025-09-08T11:33:39.320264797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 620519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T11:33:39.358365869Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/hosts",
	        "LogPath": "/var/lib/docker/containers/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c/24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c-json.log",
	        "Name": "/addons-960652",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-960652:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-960652",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "24f37931d688e060247598edc913e26a6e1a56e62b8bca494db47e4b663eef6c",
	                "LowerDir": "/var/lib/docker/overlay2/2dea81149420091d81a76eb8de5f710286b08de429c3af55a4042752a3c447f2-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2dea81149420091d81a76eb8de5f710286b08de429c3af55a4042752a3c447f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2dea81149420091d81a76eb8de5f710286b08de429c3af55a4042752a3c447f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2dea81149420091d81a76eb8de5f710286b08de429c3af55a4042752a3c447f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-960652",
	                "Source": "/var/lib/docker/volumes/addons-960652/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-960652",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-960652",
	                "name.minikube.sigs.k8s.io": "addons-960652",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0aa3c1b7418107fc0ec83e378475db5d130bc5f9ca8c6af4ddb7d24724f95ec1",
	            "SandboxKey": "/var/run/docker/netns/0aa3c1b74181",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-960652": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:92:16:85:a6:a4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45fdf7b9b342569c8c85c165d0b4cad936d009232c6be61e085655511c342d62",
	                    "EndpointID": "801af4b3003c918ad01636a6f4e3619c580ea9c686a26ac0443ff42863cdc68f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-960652",
	                        "24f37931d688"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-960652 -n addons-960652
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-960652 logs -n 25: (1.252011707s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-922073 --alsologtostderr --binary-mirror http://127.0.0.1:41749 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-922073 │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │                     │
	│ delete  │ -p binary-mirror-922073                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-922073 │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │ 08 Sep 25 11:33 UTC │
	│ addons  │ enable dashboard -p addons-960652                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │                     │
	│ addons  │ disable dashboard -p addons-960652                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │                     │
	│ start   │ -p addons-960652 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │ 08 Sep 25 11:35 UTC │
	│ addons  │ addons-960652 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:35 UTC │ 08 Sep 25 11:35 UTC │
	│ addons  │ addons-960652 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ enable headlamp -p addons-960652 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ ssh     │ addons-960652 ssh cat /opt/local-path-provisioner/pvc-a4a34cf8-0045-4c4f-ba4a-0035da17388c_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ ip      │ addons-960652 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-960652                                                                                                                                                                                                                                                                                                                                                                                           │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ addons  │ addons-960652 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │ 08 Sep 25 11:36 UTC │
	│ ssh     │ addons-960652 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:36 UTC │                     │
	│ ip      │ addons-960652 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:38 UTC │
	│ addons  │ addons-960652 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:38 UTC │ 08 Sep 25 11:39 UTC │
	│ addons  │ addons-960652 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-960652        │ jenkins │ v1.36.0 │ 08 Sep 25 11:39 UTC │ 08 Sep 25 11:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:33:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:33:14.318482  619897 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:33:14.318736  619897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:33:14.318745  619897 out.go:374] Setting ErrFile to fd 2...
	I0908 11:33:14.318750  619897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:33:14.318954  619897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 11:33:14.319579  619897 out.go:368] Setting JSON to false
	I0908 11:33:14.320529  619897 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8138,"bootTime":1757323056,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:33:14.320645  619897 start.go:140] virtualization: kvm guest
	I0908 11:33:14.322603  619897 out.go:179] * [addons-960652] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:33:14.324029  619897 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:33:14.324043  619897 notify.go:220] Checking for updates...
	I0908 11:33:14.325571  619897 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:33:14.326863  619897 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:33:14.328099  619897 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:33:14.329398  619897 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:33:14.330822  619897 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:33:14.332263  619897 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:33:14.355754  619897 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:33:14.355870  619897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:33:14.408213  619897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 11:33:14.397941321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:33:14.408343  619897 docker.go:318] overlay module found
	I0908 11:33:14.410248  619897 out.go:179] * Using the docker driver based on user configuration
	I0908 11:33:14.411724  619897 start.go:304] selected driver: docker
	I0908 11:33:14.411746  619897 start.go:918] validating driver "docker" against <nil>
	I0908 11:33:14.411761  619897 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:33:14.412687  619897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:33:14.462076  619897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 11:33:14.45326667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:33:14.462299  619897 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:33:14.462601  619897 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:33:14.464297  619897 out.go:179] * Using Docker driver with root privileges
	I0908 11:33:14.465630  619897 cni.go:84] Creating CNI manager for ""
	I0908 11:33:14.465714  619897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:33:14.465729  619897 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 11:33:14.465825  619897 start.go:348] cluster config:
	{Name:addons-960652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0908 11:33:14.467367  619897 out.go:179] * Starting "addons-960652" primary control-plane node in "addons-960652" cluster
	I0908 11:33:14.468505  619897 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 11:33:14.469665  619897 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 11:33:14.470730  619897 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:33:14.470768  619897 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 11:33:14.470778  619897 cache.go:58] Caching tarball of preloaded images
	I0908 11:33:14.470835  619897 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 11:33:14.470894  619897 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 11:33:14.470907  619897 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 11:33:14.471281  619897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/config.json ...
	I0908 11:33:14.471312  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/config.json: {Name:mk63e696e8d863718ad39ec8567b26250dce130a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:14.488199  619897 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 11:33:14.488333  619897 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 11:33:14.488352  619897 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 11:33:14.488357  619897 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 11:33:14.488366  619897 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 11:33:14.488373  619897 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0908 11:33:27.108889  619897 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0908 11:33:27.108933  619897 cache.go:232] Successfully downloaded all kic artifacts
	I0908 11:33:27.108976  619897 start.go:360] acquireMachinesLock for addons-960652: {Name:mk9214c1ac5ed01d58429ac05ff6466e746c07e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:33:27.109145  619897 start.go:364] duration metric: took 138.48µs to acquireMachinesLock for "addons-960652"
	I0908 11:33:27.109185  619897 start.go:93] Provisioning new machine with config: &{Name:addons-960652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:33:27.109276  619897 start.go:125] createHost starting for "" (driver="docker")
	I0908 11:33:27.111289  619897 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0908 11:33:27.111576  619897 start.go:159] libmachine.API.Create for "addons-960652" (driver="docker")
	I0908 11:33:27.111623  619897 client.go:168] LocalClient.Create starting
	I0908 11:33:27.111794  619897 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem
	I0908 11:33:27.211527  619897 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem
	I0908 11:33:27.622080  619897 cli_runner.go:164] Run: docker network inspect addons-960652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 11:33:27.639449  619897 cli_runner.go:211] docker network inspect addons-960652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 11:33:27.639520  619897 network_create.go:284] running [docker network inspect addons-960652] to gather additional debugging logs...
	I0908 11:33:27.639546  619897 cli_runner.go:164] Run: docker network inspect addons-960652
	W0908 11:33:27.657247  619897 cli_runner.go:211] docker network inspect addons-960652 returned with exit code 1
	I0908 11:33:27.657281  619897 network_create.go:287] error running [docker network inspect addons-960652]: docker network inspect addons-960652: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-960652 not found
	I0908 11:33:27.657297  619897 network_create.go:289] output of [docker network inspect addons-960652]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-960652 not found
	
	** /stderr **
	I0908 11:33:27.657445  619897 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 11:33:27.675352  619897 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fac3d0}
	I0908 11:33:27.675404  619897 network_create.go:124] attempt to create docker network addons-960652 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0908 11:33:27.675451  619897 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-960652 addons-960652
	I0908 11:33:27.728660  619897 network_create.go:108] docker network addons-960652 192.168.49.0/24 created
	I0908 11:33:27.728693  619897 kic.go:121] calculated static IP "192.168.49.2" for the "addons-960652" container
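	Note: the docker network create above is what pins 192.168.49.2 for the rest of this run. Shown here only as a sketch, and not a command the test executes, the resulting subnet and gateway can be confirmed with a much simpler inspect template than the one minikube itself uses (same network name assumed):
	
	    docker network inspect addons-960652 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'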
	I0908 11:33:27.728762  619897 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 11:33:27.746243  619897 cli_runner.go:164] Run: docker volume create addons-960652 --label name.minikube.sigs.k8s.io=addons-960652 --label created_by.minikube.sigs.k8s.io=true
	I0908 11:33:27.764714  619897 oci.go:103] Successfully created a docker volume addons-960652
	I0908 11:33:27.764830  619897 cli_runner.go:164] Run: docker run --rm --name addons-960652-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-960652 --entrypoint /usr/bin/test -v addons-960652:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 11:33:34.695476  619897 cli_runner.go:217] Completed: docker run --rm --name addons-960652-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-960652 --entrypoint /usr/bin/test -v addons-960652:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (6.930597146s)
	I0908 11:33:34.695519  619897 oci.go:107] Successfully prepared a docker volume addons-960652
	I0908 11:33:34.695548  619897 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:33:34.695600  619897 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 11:33:34.695685  619897 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-960652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 11:33:39.251406  619897 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-960652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.555670977s)
	I0908 11:33:39.251440  619897 kic.go:203] duration metric: took 4.555836685s to extract preloaded images to volume ...
	W0908 11:33:39.251887  619897 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 11:33:39.252127  619897 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 11:33:39.303821  619897 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-960652 --name addons-960652 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-960652 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-960652 --network addons-960652 --ip 192.168.49.2 --volume addons-960652:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 11:33:39.592070  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Running}}
	I0908 11:33:39.612774  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:33:39.632684  619897 cli_runner.go:164] Run: docker exec addons-960652 stat /var/lib/dpkg/alternatives/iptables
	I0908 11:33:39.677281  619897 oci.go:144] the created container "addons-960652" has a running status.
	I0908 11:33:39.677318  619897 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa...
	I0908 11:33:40.245857  619897 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 11:33:40.267352  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:33:40.286678  619897 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 11:33:40.286705  619897 kic_runner.go:114] Args: [docker exec --privileged addons-960652 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 11:33:40.332091  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:33:40.350583  619897 machine.go:93] provisionDockerMachine start ...
	I0908 11:33:40.350710  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:40.369620  619897 main.go:141] libmachine: Using SSH client type: native
	I0908 11:33:40.369939  619897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0908 11:33:40.369954  619897 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:33:40.491619  619897 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-960652
	
	I0908 11:33:40.491674  619897 ubuntu.go:182] provisioning hostname "addons-960652"
	I0908 11:33:40.491749  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:40.509332  619897 main.go:141] libmachine: Using SSH client type: native
	I0908 11:33:40.509560  619897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0908 11:33:40.509576  619897 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-960652 && echo "addons-960652" | sudo tee /etc/hostname
	I0908 11:33:40.639767  619897 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-960652
	
	I0908 11:33:40.639848  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:40.659283  619897 main.go:141] libmachine: Using SSH client type: native
	I0908 11:33:40.659709  619897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0908 11:33:40.659749  619897 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-960652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-960652/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-960652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:33:40.784357  619897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:33:40.784397  619897 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 11:33:40.784448  619897 ubuntu.go:190] setting up certificates
	I0908 11:33:40.784468  619897 provision.go:84] configureAuth start
	I0908 11:33:40.784537  619897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-960652
	I0908 11:33:40.802972  619897 provision.go:143] copyHostCerts
	I0908 11:33:40.803073  619897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 11:33:40.803228  619897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 11:33:40.803329  619897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 11:33:40.803406  619897 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.addons-960652 san=[127.0.0.1 192.168.49.2 addons-960652 localhost minikube]
	I0908 11:33:41.038169  619897 provision.go:177] copyRemoteCerts
	I0908 11:33:41.038254  619897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:33:41.038314  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.056946  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.149752  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 11:33:41.176267  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 11:33:41.202552  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 11:33:41.227486  619897 provision.go:87] duration metric: took 442.998521ms to configureAuth
	I0908 11:33:41.227518  619897 ubuntu.go:206] setting minikube options for container-runtime
	I0908 11:33:41.227740  619897 config.go:182] Loaded profile config "addons-960652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:33:41.227864  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.246451  619897 main.go:141] libmachine: Using SSH client type: native
	I0908 11:33:41.246682  619897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0908 11:33:41.246702  619897 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:33:41.464439  619897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:33:41.464474  619897 machine.go:96] duration metric: took 1.113860455s to provisionDockerMachine
	I0908 11:33:41.464488  619897 client.go:171] duration metric: took 14.352856449s to LocalClient.Create
	I0908 11:33:41.464516  619897 start.go:167] duration metric: took 14.352939885s to libmachine.API.Create "addons-960652"
	I0908 11:33:41.464532  619897 start.go:293] postStartSetup for "addons-960652" (driver="docker")
	I0908 11:33:41.464552  619897 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:33:41.464646  619897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:33:41.464720  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.483833  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.577389  619897 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:33:41.580702  619897 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 11:33:41.580728  619897 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 11:33:41.580735  619897 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 11:33:41.580742  619897 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 11:33:41.580753  619897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 11:33:41.580822  619897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 11:33:41.580848  619897 start.go:296] duration metric: took 116.304547ms for postStartSetup
	I0908 11:33:41.581160  619897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-960652
	I0908 11:33:41.599119  619897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/config.json ...
	I0908 11:33:41.599384  619897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:33:41.599425  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.616938  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.700946  619897 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 11:33:41.705474  619897 start.go:128] duration metric: took 14.596174719s to createHost
	I0908 11:33:41.705503  619897 start.go:83] releasing machines lock for "addons-960652", held for 14.596339054s
	I0908 11:33:41.705580  619897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-960652
	I0908 11:33:41.723185  619897 ssh_runner.go:195] Run: cat /version.json
	I0908 11:33:41.723254  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.723300  619897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:33:41.723385  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:33:41.741595  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.741883  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:33:41.899870  619897 ssh_runner.go:195] Run: systemctl --version
	I0908 11:33:41.904519  619897 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:33:42.046562  619897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 11:33:42.053144  619897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:33:42.072524  619897 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 11:33:42.072628  619897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:33:42.101638  619897 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 11:33:42.101663  619897 start.go:495] detecting cgroup driver to use...
	I0908 11:33:42.101700  619897 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 11:33:42.101747  619897 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:33:42.118167  619897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:33:42.129770  619897 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:33:42.129825  619897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:33:42.143544  619897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:33:42.158241  619897 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:33:42.244536  619897 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:33:42.331904  619897 docker.go:234] disabling docker service ...
	I0908 11:33:42.331962  619897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:33:42.352483  619897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:33:42.364776  619897 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:33:42.444084  619897 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:33:42.532173  619897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:33:42.543862  619897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:33:42.561072  619897 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 11:33:42.561140  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.571592  619897 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:33:42.571689  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.584397  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.595120  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.605544  619897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:33:42.615152  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.625278  619897 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.641872  619897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:33:42.652113  619897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:33:42.660725  619897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:33:42.669680  619897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:33:42.750148  619897 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 11:33:42.864131  619897 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:33:42.864225  619897 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 11:33:42.868259  619897 start.go:563] Will wait 60s for crictl version
	I0908 11:33:42.868333  619897 ssh_runner.go:195] Run: which crictl
	I0908 11:33:42.872381  619897 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:33:42.911298  619897 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
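	Note: with /etc/crictl.yaml written a few lines earlier, crictl inside the node resolves the CRI-O socket automatically; the endpoint can also be passed explicitly. This is only a sketch of an equivalent manual invocation, not something the test runs:
	
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version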
	I0908 11:33:42.911418  619897 ssh_runner.go:195] Run: crio --version
	I0908 11:33:42.951463  619897 ssh_runner.go:195] Run: crio --version
	I0908 11:33:42.993113  619897 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 11:33:42.994407  619897 cli_runner.go:164] Run: docker network inspect addons-960652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 11:33:43.013073  619897 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 11:33:43.017560  619897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:33:43.030202  619897 kubeadm.go:875] updating cluster {Name:addons-960652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:33:43.030319  619897 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:33:43.030364  619897 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:33:43.101136  619897 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:33:43.101160  619897 crio.go:433] Images already preloaded, skipping extraction
	I0908 11:33:43.101209  619897 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:33:43.137200  619897 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:33:43.137230  619897 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:33:43.137239  619897 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0908 11:33:43.137347  619897 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-960652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:33:43.137413  619897 ssh_runner.go:195] Run: crio config
	I0908 11:33:43.182531  619897 cni.go:84] Creating CNI manager for ""
	I0908 11:33:43.182569  619897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:33:43.182583  619897 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:33:43.182618  619897 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-960652 NodeName:addons-960652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:33:43.182786  619897 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-960652"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:33:43.182864  619897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:33:43.192022  619897 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:33:43.192099  619897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:33:43.201596  619897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 11:33:43.219879  619897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:33:43.238069  619897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
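	Note: the file just copied is the kubeadm configuration rendered above; it later becomes /var/tmp/minikube/kubeadm.yaml and is the file passed to kubeadm init (see the Start: line further down). Purely as a sketch, and not a step the test performs, a config like this can be checked with kubeadm's dry-run mode, which prints what would be done without applying it:
	
	    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run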
	I0908 11:33:43.256801  619897 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 11:33:43.260731  619897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 11:33:43.272448  619897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:33:43.348010  619897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:33:43.362120  619897 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652 for IP: 192.168.49.2
	I0908 11:33:43.362147  619897 certs.go:194] generating shared ca certs ...
	I0908 11:33:43.362167  619897 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:43.362309  619897 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 11:33:43.440168  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt ...
	I0908 11:33:43.440206  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt: {Name:mk7d80f7a404aff80aeaffcfc4edffccdfeb7dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:43.440392  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key ...
	I0908 11:33:43.440405  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key: {Name:mkab08724aeb68516406bd46f7ec1f74215962cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:43.440487  619897 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 11:33:44.023691  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt ...
	I0908 11:33:44.023726  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt: {Name:mk03660a868fb9422d263878f84ec4cde0130a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.023904  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key ...
	I0908 11:33:44.023915  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key: {Name:mkf24004a21bb9937639c2d6fa8c74d200b76207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.023987  619897 certs.go:256] generating profile certs ...
	I0908 11:33:44.024051  619897 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.key
	I0908 11:33:44.024071  619897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt with IP's: []
	I0908 11:33:44.154900  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt ...
	I0908 11:33:44.154940  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: {Name:mk67662acead9d252eb7928a0dc11c0c1f2c005f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.155124  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.key ...
	I0908 11:33:44.155136  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.key: {Name:mk8d2739d1405a7f36688c270312154dc92c57bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.155208  619897 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key.a861e59b
	I0908 11:33:44.155227  619897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt.a861e59b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0908 11:33:44.298597  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt.a861e59b ...
	I0908 11:33:44.298638  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt.a861e59b: {Name:mke9f4c212d3a4b584e6eb01f969fdf642fa3e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.298810  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key.a861e59b ...
	I0908 11:33:44.298832  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key.a861e59b: {Name:mkec19ac8aebaaff0a652a609af11dad1edf4727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.298898  619897 certs.go:381] copying /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt.a861e59b -> /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt
	I0908 11:33:44.298977  619897 certs.go:385] copying /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key.a861e59b -> /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key
	I0908 11:33:44.299024  619897 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.key
	I0908 11:33:44.299040  619897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.crt with IP's: []
	I0908 11:33:44.855211  619897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.crt ...
	I0908 11:33:44.855252  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.crt: {Name:mkc5aebb59908b46f397bbf30d93767d827141d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.855484  619897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.key ...
	I0908 11:33:44.855504  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.key: {Name:mkc6d25fff608d09c2ed36e59950b7baef9b05b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:44.855754  619897 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 11:33:44.855801  619897 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 11:33:44.855840  619897 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:33:44.855872  619897 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 11:33:44.856480  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:33:44.883262  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 11:33:44.908769  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:33:44.932969  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 11:33:44.957563  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 11:33:44.982793  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 11:33:45.007259  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:33:45.031085  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 11:33:45.054914  619897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:33:45.078362  619897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:33:45.096451  619897 ssh_runner.go:195] Run: openssl version
	I0908 11:33:45.101900  619897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:33:45.111291  619897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:33:45.114741  619897 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:33:45.114792  619897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:33:45.121328  619897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:33:45.131152  619897 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:33:45.135058  619897 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 11:33:45.135114  619897 kubeadm.go:392] StartCluster: {Name:addons-960652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-960652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:33:45.135195  619897 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:33:45.135249  619897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:33:45.172742  619897 cri.go:89] found id: ""
	I0908 11:33:45.172817  619897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 11:33:45.181786  619897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 11:33:45.190780  619897 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 11:33:45.190843  619897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 11:33:45.200921  619897 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 11:33:45.200943  619897 kubeadm.go:157] found existing configuration files:
	
	I0908 11:33:45.200997  619897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 11:33:45.209848  619897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 11:33:45.209907  619897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 11:33:45.218481  619897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 11:33:45.227090  619897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 11:33:45.227154  619897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 11:33:45.235619  619897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 11:33:45.244180  619897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 11:33:45.244247  619897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 11:33:45.252892  619897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 11:33:45.261549  619897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 11:33:45.261601  619897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 11:33:45.270096  619897 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 11:33:45.324755  619897 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 11:33:45.325032  619897 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0908 11:33:45.379226  619897 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 11:33:56.410930  619897 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 11:33:56.411022  619897 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 11:33:56.411141  619897 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 11:33:56.411238  619897 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0908 11:33:56.411297  619897 kubeadm.go:310] OS: Linux
	I0908 11:33:56.411365  619897 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 11:33:56.411450  619897 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 11:33:56.411524  619897 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 11:33:56.411601  619897 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 11:33:56.411697  619897 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 11:33:56.411772  619897 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 11:33:56.411839  619897 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 11:33:56.411909  619897 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 11:33:56.411988  619897 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 11:33:56.412097  619897 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 11:33:56.412263  619897 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 11:33:56.412399  619897 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 11:33:56.412496  619897 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 11:33:56.414508  619897 out.go:252]   - Generating certificates and keys ...
	I0908 11:33:56.414666  619897 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 11:33:56.414773  619897 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 11:33:56.414875  619897 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 11:33:56.414962  619897 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 11:33:56.415058  619897 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 11:33:56.415138  619897 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 11:33:56.415224  619897 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 11:33:56.415360  619897 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-960652 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 11:33:56.415431  619897 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 11:33:56.415538  619897 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-960652 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 11:33:56.415594  619897 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 11:33:56.415685  619897 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 11:33:56.415726  619897 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 11:33:56.415801  619897 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 11:33:56.415856  619897 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 11:33:56.415914  619897 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 11:33:56.415962  619897 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 11:33:56.416018  619897 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 11:33:56.416076  619897 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 11:33:56.416149  619897 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 11:33:56.416226  619897 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 11:33:56.417483  619897 out.go:252]   - Booting up control plane ...
	I0908 11:33:56.417581  619897 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 11:33:56.417657  619897 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 11:33:56.417736  619897 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 11:33:56.417862  619897 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 11:33:56.417986  619897 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 11:33:56.418092  619897 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 11:33:56.418170  619897 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 11:33:56.418245  619897 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 11:33:56.418375  619897 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 11:33:56.418471  619897 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 11:33:56.418539  619897 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995188ms
	I0908 11:33:56.418669  619897 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 11:33:56.418785  619897 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0908 11:33:56.418910  619897 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 11:33:56.419023  619897 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 11:33:56.419087  619897 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.110331441s
	I0908 11:33:56.419153  619897 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.682121444s
	I0908 11:33:56.419207  619897 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.501384741s
	I0908 11:33:56.419341  619897 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 11:33:56.419443  619897 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 11:33:56.419602  619897 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 11:33:56.419871  619897 kubeadm.go:310] [mark-control-plane] Marking the node addons-960652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 11:33:56.419952  619897 kubeadm.go:310] [bootstrap-token] Using token: 4fcisp.lymz102kws8rtzux
	I0908 11:33:56.421327  619897 out.go:252]   - Configuring RBAC rules ...
	I0908 11:33:56.421462  619897 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 11:33:56.421550  619897 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 11:33:56.421694  619897 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 11:33:56.421828  619897 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 11:33:56.421961  619897 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 11:33:56.422085  619897 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 11:33:56.422242  619897 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 11:33:56.422334  619897 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 11:33:56.422407  619897 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 11:33:56.422415  619897 kubeadm.go:310] 
	I0908 11:33:56.422462  619897 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 11:33:56.422467  619897 kubeadm.go:310] 
	I0908 11:33:56.422574  619897 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 11:33:56.422599  619897 kubeadm.go:310] 
	I0908 11:33:56.422645  619897 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 11:33:56.422706  619897 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 11:33:56.422749  619897 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 11:33:56.422755  619897 kubeadm.go:310] 
	I0908 11:33:56.422796  619897 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 11:33:56.422810  619897 kubeadm.go:310] 
	I0908 11:33:56.422848  619897 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 11:33:56.422854  619897 kubeadm.go:310] 
	I0908 11:33:56.422894  619897 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 11:33:56.422955  619897 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 11:33:56.423014  619897 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 11:33:56.423041  619897 kubeadm.go:310] 
	I0908 11:33:56.423155  619897 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 11:33:56.423267  619897 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 11:33:56.423280  619897 kubeadm.go:310] 
	I0908 11:33:56.423390  619897 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4fcisp.lymz102kws8rtzux \
	I0908 11:33:56.423542  619897 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7637fdf591f5caebb87eda0c466f59e2329c3b4b8f568d0c2721bf3a0b62aaf9 \
	I0908 11:33:56.423576  619897 kubeadm.go:310] 	--control-plane 
	I0908 11:33:56.423590  619897 kubeadm.go:310] 
	I0908 11:33:56.423724  619897 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 11:33:56.423733  619897 kubeadm.go:310] 
	I0908 11:33:56.423839  619897 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4fcisp.lymz102kws8rtzux \
	I0908 11:33:56.423988  619897 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7637fdf591f5caebb87eda0c466f59e2329c3b4b8f568d0c2721bf3a0b62aaf9 
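	The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA's public key; should it need to be re-derived later, a minimal sketch (run on the control-plane node, default kubeadm certificate path assumed):
	
		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'
	
	Fresh bootstrap tokens (the one above expires after 24 hours by default) can likewise be minted with "kubeadm token create --print-join-command".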
	I0908 11:33:56.424015  619897 cni.go:84] Creating CNI manager for ""
	I0908 11:33:56.424025  619897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:33:56.425557  619897 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 11:33:56.426820  619897 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 11:33:56.431301  619897 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 11:33:56.431326  619897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 11:33:56.450462  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
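	Once the kindnet manifest above has been applied, the CNI rollout can be checked independently; a small sketch, with the DaemonSet name assumed from minikube's bundled kindnet manifest:
	
		# adjust the DaemonSet name if the manifest shipped with this minikube version differs
		kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
		kubectl -n kube-system get pods -o wide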
	I0908 11:33:56.670998  619897 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 11:33:56.671086  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:56.671123  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-960652 minikube.k8s.io/updated_at=2025_09_08T11_33_56_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=addons-960652 minikube.k8s.io/primary=true
	I0908 11:33:56.678870  619897 ops.go:34] apiserver oom_adj: -16
	I0908 11:33:56.885029  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:57.385360  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:57.885679  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:58.385872  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:58.885540  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:59.385878  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:33:59.885163  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:34:00.385274  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:34:00.885800  619897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 11:34:00.953407  619897 kubeadm.go:1105] duration metric: took 4.282393594s to wait for elevateKubeSystemPrivileges
	I0908 11:34:00.953442  619897 kubeadm.go:394] duration metric: took 15.818333544s to StartCluster
	I0908 11:34:00.953468  619897 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:34:00.953609  619897 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:34:00.954090  619897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:34:00.954320  619897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 11:34:00.954343  619897 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 11:34:00.954425  619897 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 11:34:00.954539  619897 config.go:182] Loaded profile config "addons-960652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
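	The per-addon switches recorded in the toEnable map above can also be toggled from the CLI after start-up; a brief sketch using the standard minikube addon commands and the profile name from this run:
	
		minikube -p addons-960652 addons list
		minikube -p addons-960652 addons enable metrics-server
		minikube -p addons-960652 addons disable volcano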
	I0908 11:34:00.954569  619897 addons.go:69] Setting yakd=true in profile "addons-960652"
	I0908 11:34:00.954583  619897 addons.go:69] Setting cloud-spanner=true in profile "addons-960652"
	I0908 11:34:00.954592  619897 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-960652"
	I0908 11:34:00.954604  619897 addons.go:69] Setting registry=true in profile "addons-960652"
	I0908 11:34:00.954614  619897 addons.go:69] Setting default-storageclass=true in profile "addons-960652"
	I0908 11:34:00.954617  619897 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-960652"
	I0908 11:34:00.954622  619897 addons.go:238] Setting addon registry=true in "addons-960652"
	I0908 11:34:00.954617  619897 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-960652"
	I0908 11:34:00.954640  619897 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-960652"
	I0908 11:34:00.954665  619897 addons.go:69] Setting ingress-dns=true in profile "addons-960652"
	I0908 11:34:00.954678  619897 addons.go:69] Setting volumesnapshots=true in profile "addons-960652"
	I0908 11:34:00.954680  619897 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-960652"
	I0908 11:34:00.954690  619897 addons.go:69] Setting registry-creds=true in profile "addons-960652"
	I0908 11:34:00.954700  619897 addons.go:69] Setting metrics-server=true in profile "addons-960652"
	I0908 11:34:00.954701  619897 addons.go:69] Setting storage-provisioner=true in profile "addons-960652"
	I0908 11:34:00.954705  619897 addons.go:238] Setting addon registry-creds=true in "addons-960652"
	I0908 11:34:00.954712  619897 addons.go:238] Setting addon storage-provisioner=true in "addons-960652"
	I0908 11:34:00.954628  619897 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-960652"
	I0908 11:34:00.954731  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954737  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954606  619897 addons.go:69] Setting ingress=true in profile "addons-960652"
	I0908 11:34:00.954713  619897 addons.go:238] Setting addon metrics-server=true in "addons-960652"
	I0908 11:34:00.954756  619897 addons.go:238] Setting addon ingress=true in "addons-960652"
	I0908 11:34:00.954764  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954787  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954691  619897 addons.go:69] Setting inspektor-gadget=true in profile "addons-960652"
	I0908 11:34:00.955110  619897 addons.go:238] Setting addon inspektor-gadget=true in "addons-960652"
	I0908 11:34:00.955164  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954664  619897 addons.go:69] Setting volcano=true in profile "addons-960652"
	I0908 11:34:00.955281  619897 addons.go:238] Setting addon volcano=true in "addons-960652"
	I0908 11:34:00.955300  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955337  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955366  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.954596  619897 addons.go:238] Setting addon yakd=true in "addons-960652"
	I0908 11:34:00.955417  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.955700  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955817  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955860  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955311  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.956648  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.955166  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.954654  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954678  619897 addons.go:238] Setting addon ingress-dns=true in "addons-960652"
	I0908 11:34:00.957896  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954569  619897 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-960652"
	I0908 11:34:00.958069  619897 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-960652"
	I0908 11:34:00.958102  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.958396  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.958588  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.954596  619897 addons.go:238] Setting addon cloud-spanner=true in "addons-960652"
	I0908 11:34:00.958764  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954636  619897 addons.go:69] Setting gcp-auth=true in profile "addons-960652"
	I0908 11:34:00.960051  619897 mustload.go:65] Loading cluster: addons-960652
	I0908 11:34:00.954655  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954677  619897 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-960652"
	I0908 11:34:00.964673  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.954692  619897 addons.go:238] Setting addon volumesnapshots=true in "addons-960652"
	I0908 11:34:00.968383  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.954741  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:00.969613  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.969658  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.957520  619897 out.go:179] * Verifying Kubernetes components...
	I0908 11:34:00.973609  619897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:34:00.991981  619897 config.go:182] Loaded profile config "addons-960652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:34:00.992110  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	W0908 11:34:00.992223  619897 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 11:34:00.992276  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.992286  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:00.995889  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:01.001216  619897 addons.go:238] Setting addon default-storageclass=true in "addons-960652"
	I0908 11:34:01.001272  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:01.001702  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:01.004620  619897 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 11:34:01.007043  619897 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 11:34:01.007215  619897 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 11:34:01.007232  619897 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 11:34:01.007316  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.008153  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 11:34:01.008174  619897 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 11:34:01.008233  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.010295  619897 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 11:34:01.011469  619897 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 11:34:01.011504  619897 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 11:34:01.011523  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 11:34:01.011593  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.013040  619897 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:34:01.013063  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 11:34:01.013122  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.020732  619897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 11:34:01.024740  619897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 11:34:01.025935  619897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 11:34:01.027387  619897 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 11:34:01.027412  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 11:34:01.027485  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.040551  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 11:34:01.041439  619897 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-960652"
	I0908 11:34:01.041494  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:01.041564  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.041798  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 11:34:01.041821  619897 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 11:34:01.041878  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.041937  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:01.055155  619897 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0908 11:34:01.058318  619897 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 11:34:01.058347  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 11:34:01.058424  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.060891  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 11:34:01.061941  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 11:34:01.063022  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 11:34:01.063983  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 11:34:01.065118  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 11:34:01.066103  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 11:34:01.067173  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 11:34:01.068242  619897 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 11:34:01.070413  619897 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 11:34:01.070436  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 11:34:01.070505  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.070931  619897 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 11:34:01.071507  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.072445  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 11:34:01.072475  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 11:34:01.072555  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.087297  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:01.088477  619897 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 11:34:01.088581  619897 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 11:34:01.090812  619897 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 11:34:01.090838  619897 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 11:34:01.090910  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.091155  619897 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 11:34:01.093241  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.095785  619897 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 11:34:01.095815  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 11:34:01.095878  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.098375  619897 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 11:34:01.099557  619897 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 11:34:01.099586  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 11:34:01.099668  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.105077  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.112113  619897 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 11:34:01.112142  619897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 11:34:01.112205  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.113635  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.117771  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.125950  619897 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 11:34:01.132793  619897 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 11:34:01.132823  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 11:34:01.132899  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.137490  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.139597  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.141421  619897 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 11:34:01.141453  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.145469  619897 out.go:179]   - Using image docker.io/busybox:stable
	I0908 11:34:01.145473  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.146880  619897 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 11:34:01.146903  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 11:34:01.146963  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:01.151091  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.154696  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.157280  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.163220  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:01.167060  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	W0908 11:34:01.182587  619897 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 11:34:01.182635  619897 retry.go:31] will retry after 169.961046ms: ssh: handshake failed: EOF
	W0908 11:34:01.182587  619897 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 11:34:01.182656  619897 retry.go:31] will retry after 211.29249ms: ssh: handshake failed: EOF
	I0908 11:34:01.289199  619897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
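	The pipeline above rewrites the CoreDNS Corefile in place, inserting a hosts block (192.168.49.1 host.minikube.internal with fallthrough) ahead of the forward plugin and a log directive ahead of errors; the result can be confirmed afterwards with a quick check such as:
	
		kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'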
	I0908 11:34:01.380080  619897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:34:01.490045  619897 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 11:34:01.490152  619897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 11:34:01.491928  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 11:34:01.491966  619897 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 11:34:01.493756  619897 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:01.493784  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 11:34:01.578489  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 11:34:01.596098  619897 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 11:34:01.596202  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 11:34:01.680901  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 11:34:01.693003  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:01.776251  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 11:34:01.776359  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 11:34:01.777684  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 11:34:01.778223  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 11:34:01.778278  619897 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 11:34:01.789500  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 11:34:01.877454  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 11:34:01.879375  619897 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 11:34:01.879454  619897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 11:34:01.883571  619897 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 11:34:01.883663  619897 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 11:34:01.884319  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 11:34:01.894207  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 11:34:01.993144  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 11:34:02.077317  619897 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 11:34:02.077417  619897 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 11:34:02.084042  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 11:34:02.084165  619897 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 11:34:02.091599  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 11:34:02.097190  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 11:34:02.097287  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 11:34:02.281278  619897 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 11:34:02.281394  619897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 11:34:02.297371  619897 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 11:34:02.297487  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 11:34:02.377371  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 11:34:02.377409  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 11:34:02.494447  619897 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:34:02.494584  619897 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 11:34:02.577905  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 11:34:02.589092  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 11:34:02.589199  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 11:34:02.680976  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 11:34:02.681084  619897 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 11:34:02.781293  619897 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 11:34:02.781403  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 11:34:02.790672  619897 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 11:34:02.790712  619897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 11:34:03.378788  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 11:34:03.393269  619897 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 11:34:03.393308  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 11:34:03.398047  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 11:34:03.487000  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 11:34:03.487118  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 11:34:03.687784  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 11:34:03.687887  619897 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 11:34:03.776442  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 11:34:03.795307  619897 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.415060036s)
	I0908 11:34:03.796293  619897 node_ready.go:35] waiting up to 6m0s for node "addons-960652" to be "Ready" ...
	I0908 11:34:03.796640  619897 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.507390317s)
	I0908 11:34:03.796693  619897 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0908 11:34:04.376688  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 11:34:04.376817  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 11:34:04.887162  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 11:34:04.887278  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 11:34:05.101951  619897 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-960652" context rescaled to 1 replicas
	I0908 11:34:05.277675  619897 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 11:34:05.277778  619897 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 11:34:05.385881  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W0908 11:34:05.977850  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:06.391275  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.710270254s)
	I0908 11:34:06.391543  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.813015231s)
	I0908 11:34:06.677695  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.984498727s)
	W0908 11:34:06.677790  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:06.677820  619897 retry.go:31] will retry after 170.880749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
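	The validation error above ("apiVersion not set, kind not set") is consistent with the near-empty ig-crd.yaml transferred earlier in this log (the scp line reports only 14 bytes); a manifest like that can be caught before it ever reaches the cluster with a client-side dry run, for example:
	
		# fails fast on manifests missing apiVersion/kind, without touching the cluster
		kubectl apply --dry-run=client --validate=true -f /etc/kubernetes/addons/ig-crd.yaml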
	I0908 11:34:06.849242  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:07.601838  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.812297836s)
	I0908 11:34:07.601935  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.724377166s)
	I0908 11:34:07.601958  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.717577763s)
	I0908 11:34:07.601979  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.707685157s)
	I0908 11:34:07.602024  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.608779058s)
	I0908 11:34:07.602051  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.510342542s)
	I0908 11:34:07.602082  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.024077908s)
	I0908 11:34:07.602859  619897 addons.go:479] Verifying addon registry=true in "addons-960652"
	I0908 11:34:07.602132  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.223238413s)
	I0908 11:34:07.603167  619897 addons.go:479] Verifying addon metrics-server=true in "addons-960652"
	I0908 11:34:07.602173  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.20409235s)
	I0908 11:34:07.603356  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.825596652s)
	I0908 11:34:07.603415  619897 addons.go:479] Verifying addon ingress=true in "addons-960652"
	I0908 11:34:07.604567  619897 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-960652 service yakd-dashboard -n yakd-dashboard
	
	I0908 11:34:07.604594  619897 out.go:179] * Verifying registry addon...
	I0908 11:34:07.605424  619897 out.go:179] * Verifying ingress addon...
	I0908 11:34:07.606784  619897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 11:34:07.607292  619897 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 11:34:07.679628  619897 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 11:34:07.679741  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:07.679810  619897 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 11:34:07.679834  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:07.684362  619897 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
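	The conflict above is ordinary optimistic-concurrency contention on the StorageClass object; if local-path still needs to become the default class after the run, the documented annotation can be patched directly (a sketch only; "standard" is assumed to be minikube's pre-existing default class):
	
		kubectl patch storageclass local-path \
		  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
		kubectl patch storageclass standard \
		  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'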
	I0908 11:34:08.110720  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:08.110766  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:08.299834  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:08.610717  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:08.610931  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:08.696797  619897 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 11:34:08.696992  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:08.720842  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:08.977855  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.201353915s)
	W0908 11:34:08.977907  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 11:34:08.977932  619897 retry.go:31] will retry after 216.72198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
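	The "ensure CRDs are installed first" error above is the usual ordering problem when CRDs and the custom resources that depend on them are sent in a single apply; the log retries later with --force, but an equivalent manual fix is to apply the CRDs first and wait for them to be established before creating the VolumeSnapshotClass, for example:
	
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml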
	I0908 11:34:08.978171  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.592166773s)
	I0908 11:34:08.978211  619897 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-960652"
	I0908 11:34:08.980027  619897 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 11:34:08.982448  619897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 11:34:08.986052  619897 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 11:34:08.986078  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:09.002683  619897 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 11:34:09.085374  619897 addons.go:238] Setting addon gcp-auth=true in "addons-960652"
	I0908 11:34:09.085448  619897 host.go:66] Checking if "addons-960652" exists ...
	I0908 11:34:09.085840  619897 cli_runner.go:164] Run: docker container inspect addons-960652 --format={{.State.Status}}
	I0908 11:34:09.106987  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.257700445s)
	W0908 11:34:09.107023  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:09.107046  619897 retry.go:31] will retry after 329.343963ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:09.107885  619897 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 11:34:09.107943  619897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-960652
	I0908 11:34:09.111569  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:09.111698  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:09.126635  619897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/addons-960652/id_rsa Username:docker}
	I0908 11:34:09.195230  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 11:34:09.436982  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:09.485960  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:09.611581  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:09.611858  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:09.986323  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:10.110577  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:10.110861  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:10.300065  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:10.486318  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:10.610696  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:10.610928  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:10.986489  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:11.111223  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:11.111278  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:11.486808  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:11.610664  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:11.610884  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:11.730439  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.535158491s)
	I0908 11:34:11.730504  619897 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.622585723s)
	I0908 11:34:11.730548  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.293512363s)
	W0908 11:34:11.730573  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:11.730591  619897 retry.go:31] will retry after 623.173809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:11.732880  619897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 11:34:11.734287  619897 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 11:34:11.735614  619897 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 11:34:11.735633  619897 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 11:34:11.754344  619897 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 11:34:11.754384  619897 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 11:34:11.771820  619897 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 11:34:11.771844  619897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 11:34:11.789886  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 11:34:11.986218  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:12.114087  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:12.114792  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:12.177449  619897 addons.go:479] Verifying addon gcp-auth=true in "addons-960652"
	I0908 11:34:12.179101  619897 out.go:179] * Verifying gcp-auth addon...
	I0908 11:34:12.181827  619897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 11:34:12.184308  619897 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 11:34:12.184328  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:12.300201  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:12.354464  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:12.486775  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:12.610932  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:12.611042  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:12.686063  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:12.925012  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:12.925054  619897 retry.go:31] will retry after 909.363968ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:12.986352  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:13.110617  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:13.110672  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:13.185926  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:13.486312  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:13.611492  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:13.611524  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:13.685558  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:13.835593  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:13.986175  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:14.111927  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:14.111940  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:14.185623  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:14.300499  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	W0908 11:34:14.401824  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:14.401860  619897 retry.go:31] will retry after 1.294572327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:14.486041  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:14.611005  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:14.611161  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:14.712056  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:14.986095  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:15.111204  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:15.111250  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:15.185311  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:15.486949  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:15.611340  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:15.611559  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:15.685430  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:15.697495  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:15.985523  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:16.110404  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:16.110605  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:16.185013  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:16.272316  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:16.272376  619897 retry.go:31] will retry after 961.705756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:16.486269  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:16.611337  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:16.611417  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:16.685404  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:16.799305  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:16.987101  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:17.111483  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:17.111641  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:17.185931  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:17.235082  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:17.486476  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:17.611237  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:17.611298  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:17.685790  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:17.804873  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:17.804910  619897 retry.go:31] will retry after 1.762445108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:17.986287  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:18.111452  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:18.111709  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:18.185357  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:18.485627  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:18.610723  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:18.610836  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:18.686171  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:18.800435  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:18.986064  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:19.111432  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:19.111530  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:19.185273  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:19.486062  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:19.568258  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:19.610773  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:19.610875  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:19.685741  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:19.986575  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:20.111569  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:20.111601  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 11:34:20.132104  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:20.132145  619897 retry.go:31] will retry after 2.782976601s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:20.185407  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:20.486018  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:20.611345  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:20.611437  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:20.685641  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:20.986720  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:21.111429  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:21.111533  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:21.185116  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:21.300492  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:21.486462  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:21.610480  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:21.610540  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:21.685439  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:21.986341  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:22.110520  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:22.110849  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:22.185765  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:22.485893  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:22.611062  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:22.611210  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:22.685239  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:22.915600  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:22.986821  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:23.111570  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:23.111637  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:23.186170  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:23.486236  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 11:34:23.491554  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:23.491590  619897 retry.go:31] will retry after 6.078040333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:23.610917  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:23.611023  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:23.686076  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:23.799831  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:23.985764  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:24.110755  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:24.110959  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:24.184972  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:24.486878  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:24.610831  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:24.610972  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:24.685261  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:24.986798  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:25.110788  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:25.110893  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:25.184999  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:25.486267  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:25.610798  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:25.610983  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:25.686122  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:25.800114  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:25.986443  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:26.110745  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:26.110881  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:26.184883  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:26.485853  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:26.611176  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:26.611223  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:26.686927  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:26.986367  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:27.110553  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:27.110737  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:27.185734  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:27.486180  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:27.611719  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:27.611937  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:27.686249  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:27.800422  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:27.985724  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:28.111006  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:28.111081  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:28.184853  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:28.486071  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:28.611302  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:28.611529  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:28.685690  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:28.986581  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:29.110671  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:29.110792  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:29.185861  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:29.486568  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:29.570751  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:29.611366  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:29.611539  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:29.685331  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:29.800681  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:29.985950  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:30.110758  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:30.110799  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:30.168721  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:30.168760  619897 retry.go:31] will retry after 10.429694039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:30.186034  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:30.485669  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:30.611259  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:30.611525  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:30.685512  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:30.986025  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:31.111433  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:31.111613  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:31.185521  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:31.485852  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:31.610770  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:31.610918  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:31.685832  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:31.985437  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:32.110485  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:32.110599  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:32.185442  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:32.299577  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:32.485705  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:32.611362  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:32.611505  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:32.685537  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:32.985818  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:33.111161  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:33.111319  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:33.185156  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:33.486648  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:33.610639  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:33.610824  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:33.686061  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:33.986379  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:34.110642  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:34.110692  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:34.185822  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:34.299930  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:34.486405  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:34.610296  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:34.610343  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:34.685117  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:34.986807  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:35.111337  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:35.111351  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:35.185697  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:35.486452  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:35.610757  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:35.610886  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:35.686065  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:35.986070  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:36.111162  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:36.111173  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:36.185032  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:36.300054  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:36.486110  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:36.611401  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:36.611484  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:36.685433  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:36.986659  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:37.111310  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:37.111319  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:37.185305  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:37.485534  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:37.611205  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:37.611406  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:37.685513  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:37.986746  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:38.111238  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:38.111328  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:38.185484  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:38.486481  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:38.610528  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:38.610582  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:38.685777  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:38.799783  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:38.985814  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:39.111099  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:39.111211  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:39.184960  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:39.485895  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:39.610986  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:39.611241  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:39.684983  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:39.986703  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:40.110931  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:40.111224  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:40.185288  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:40.486673  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:40.598858  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:34:40.611418  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:40.611538  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:40.685256  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:40.800265  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:40.986439  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:41.110739  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:41.110843  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 11:34:41.169150  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:34:41.169186  619897 retry.go:31] will retry after 21.354000525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
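	The repeated apply failure above is a validation error: kubectl refuses /etc/kubernetes/addons/ig-crd.yaml because the manifest does not set apiVersion or kind, fields every Kubernetes object needs before it can be applied. A minimal sketch for inspecting the file inside the node (profile name taken from this log, minikube binary assumed to be on PATH, head length arbitrary):
	
	  minikube -p addons-960652 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	  # each YAML document in the file should begin with "apiVersion: ..." and "kind: ..."
	  # before kubectl will validate it without --validate=false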
	I0908 11:34:41.185419  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:41.485902  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:41.610981  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:41.611151  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:41.685151  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:41.986966  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:42.110695  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:42.110881  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:42.185061  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:42.486711  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:42.611199  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:42.611296  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:42.685246  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:42.800641  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:42.985705  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:43.110875  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:43.110975  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:43.185671  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:43.486188  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:43.611223  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:43.611362  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:43.685289  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:43.985551  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:44.111036  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:44.111092  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:44.185069  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:44.486193  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:44.611179  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:44.611342  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:44.685580  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:44.986367  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:45.111345  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:45.111423  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:45.185125  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0908 11:34:45.300099  619897 node_ready.go:57] node "addons-960652" has "Ready":"False" status (will retry)
	I0908 11:34:45.491225  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:45.680531  619897 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 11:34:45.680557  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:45.682328  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:45.689241  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:45.805898  619897 node_ready.go:49] node "addons-960652" is "Ready"
	I0908 11:34:45.805944  619897 node_ready.go:38] duration metric: took 42.009613325s for node "addons-960652" to be "Ready" ...
	I0908 11:34:45.805965  619897 api_server.go:52] waiting for apiserver process to appear ...
	I0908 11:34:45.806033  619897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:34:45.892893  619897 api_server.go:72] duration metric: took 44.938504251s to wait for apiserver process to appear ...
	I0908 11:34:45.892927  619897 api_server.go:88] waiting for apiserver healthz status ...
	I0908 11:34:45.892957  619897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0908 11:34:45.900123  619897 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0908 11:34:45.901502  619897 api_server.go:141] control plane version: v1.34.0
	I0908 11:34:45.901602  619897 api_server.go:131] duration metric: took 8.660932ms to wait for apiserver health ...
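	The healthz wait above queries the apiserver directly at https://192.168.49.2:8443/healthz and proceeds once it returns 200/ok. A minimal sketch of the same probe done by hand (the -k flag skips TLS verification, since the cluster CA is not assumed to be in the caller's trust store):
	
	  curl -k https://192.168.49.2:8443/healthz
	  # expected output on a healthy control plane: ok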
	I0908 11:34:45.901622  619897 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 11:34:45.981045  619897 system_pods.go:59] 20 kube-system pods found
	I0908 11:34:45.981171  619897 system_pods.go:61] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending
	I0908 11:34:45.981205  619897 system_pods.go:61] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:34:45.981253  619897 system_pods.go:61] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:45.981274  619897 system_pods.go:61] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending
	I0908 11:34:45.981291  619897 system_pods.go:61] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending
	I0908 11:34:45.981307  619897 system_pods.go:61] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:45.981336  619897 system_pods.go:61] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:45.981360  619897 system_pods.go:61] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:45.981377  619897 system_pods.go:61] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:45.981394  619897 system_pods.go:61] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending
	I0908 11:34:45.981409  619897 system_pods.go:61] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:45.981437  619897 system_pods.go:61] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:45.981464  619897 system_pods.go:61] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:45.981480  619897 system_pods.go:61] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending
	I0908 11:34:45.981494  619897 system_pods.go:61] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending
	I0908 11:34:45.981508  619897 system_pods.go:61] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending
	I0908 11:34:45.981534  619897 system_pods.go:61] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending
	I0908 11:34:45.981562  619897 system_pods.go:61] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:45.981580  619897 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:45.981594  619897 system_pods.go:61] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Pending
	I0908 11:34:45.981637  619897 system_pods.go:74] duration metric: took 80.006948ms to wait for pod list to return data ...
	I0908 11:34:45.981667  619897 default_sa.go:34] waiting for default service account to be created ...
	I0908 11:34:45.985381  619897 default_sa.go:45] found service account: "default"
	I0908 11:34:45.985411  619897 default_sa.go:55] duration metric: took 3.727336ms for default service account to be created ...
	I0908 11:34:45.985424  619897 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 11:34:45.992021  619897 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 11:34:45.992062  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:45.995245  619897 system_pods.go:86] 20 kube-system pods found
	I0908 11:34:45.995285  619897 system_pods.go:89] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending
	I0908 11:34:45.995301  619897 system_pods.go:89] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:34:45.995310  619897 system_pods.go:89] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:45.995319  619897 system_pods.go:89] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending
	I0908 11:34:45.995325  619897 system_pods.go:89] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending
	I0908 11:34:45.995330  619897 system_pods.go:89] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:45.995337  619897 system_pods.go:89] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:45.995350  619897 system_pods.go:89] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:45.995356  619897 system_pods.go:89] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:45.995362  619897 system_pods.go:89] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending
	I0908 11:34:45.995376  619897 system_pods.go:89] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:45.995382  619897 system_pods.go:89] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:45.995397  619897 system_pods.go:89] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:45.995404  619897 system_pods.go:89] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending
	I0908 11:34:45.995411  619897 system_pods.go:89] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending
	I0908 11:34:45.995415  619897 system_pods.go:89] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending
	I0908 11:34:45.995420  619897 system_pods.go:89] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending
	I0908 11:34:45.995428  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:45.995443  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:45.995450  619897 system_pods.go:89] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Pending
	I0908 11:34:45.995478  619897 retry.go:31] will retry after 284.87762ms: missing components: kube-dns
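	The retry above is blocked only on kube-dns; in the snapshot just listed, the coredns-66bc5c9577-np8sm pod is still Pending. A minimal sketch of checking that single condition by hand, assuming the conventional k8s-app=kube-dns label and reusing the in-node kubectl invocation shown elsewhere in this log:
	
	  minikube -p addons-960652 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl -n kube-system get pods -l k8s-app=kube-dns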
	I0908 11:34:46.110909  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:46.112086  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:46.187379  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:46.286844  619897 system_pods.go:86] 20 kube-system pods found
	I0908 11:34:46.286891  619897 system_pods.go:89] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 11:34:46.286902  619897 system_pods.go:89] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:34:46.286912  619897 system_pods.go:89] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:46.286920  619897 system_pods.go:89] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:34:46.286927  619897 system_pods.go:89] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending
	I0908 11:34:46.286933  619897 system_pods.go:89] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:46.286938  619897 system_pods.go:89] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:46.286944  619897 system_pods.go:89] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:46.286957  619897 system_pods.go:89] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:46.286964  619897 system_pods.go:89] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:34:46.286971  619897 system_pods.go:89] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:46.286977  619897 system_pods.go:89] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:46.286984  619897 system_pods.go:89] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:46.286990  619897 system_pods.go:89] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending
	I0908 11:34:46.286998  619897 system_pods.go:89] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:34:46.287008  619897 system_pods.go:89] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:34:46.287016  619897 system_pods.go:89] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:34:46.287088  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:46.287098  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:46.287112  619897 system_pods.go:89] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 11:34:46.287137  619897 retry.go:31] will retry after 379.968765ms: missing components: kube-dns
	I0908 11:34:46.486677  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:46.685613  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:46.686389  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:46.687210  619897 system_pods.go:86] 20 kube-system pods found
	I0908 11:34:46.687242  619897 system_pods.go:89] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 11:34:46.687269  619897 system_pods.go:89] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 11:34:46.687285  619897 system_pods.go:89] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:46.687298  619897 system_pods.go:89] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:34:46.687312  619897 system_pods.go:89] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 11:34:46.687320  619897 system_pods.go:89] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:46.687333  619897 system_pods.go:89] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:46.687341  619897 system_pods.go:89] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:46.687347  619897 system_pods.go:89] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:46.687358  619897 system_pods.go:89] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:34:46.687368  619897 system_pods.go:89] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:46.687374  619897 system_pods.go:89] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:46.687385  619897 system_pods.go:89] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:46.687395  619897 system_pods.go:89] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 11:34:46.687406  619897 system_pods.go:89] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:34:46.687418  619897 system_pods.go:89] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:34:46.687433  619897 system_pods.go:89] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:34:46.687441  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:46.687452  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:46.687466  619897 system_pods.go:89] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 11:34:46.687487  619897 retry.go:31] will retry after 345.410441ms: missing components: kube-dns
	I0908 11:34:46.780566  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:46.985856  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:47.042361  619897 system_pods.go:86] 20 kube-system pods found
	I0908 11:34:47.042397  619897 system_pods.go:89] "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0908 11:34:47.042403  619897 system_pods.go:89] "coredns-66bc5c9577-np8sm" [dd077421-9a2f-4cf6-85b5-91e95acb5c29] Running
	I0908 11:34:47.042410  619897 system_pods.go:89] "csi-hostpath-attacher-0" [d0304260-a998-4367-bde0-ee3b3eeb6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 11:34:47.042417  619897 system_pods.go:89] "csi-hostpath-resizer-0" [9c73ee63-d50a-4c38-9a2a-b8a18f03e052] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 11:34:47.042423  619897 system_pods.go:89] "csi-hostpathplugin-x742h" [d29c6c4d-0ce4-4f2c-b140-84580bf95d82] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 11:34:47.042427  619897 system_pods.go:89] "etcd-addons-960652" [68fb7a0f-5079-4c33-b5ff-babac14b19c1] Running
	I0908 11:34:47.042430  619897 system_pods.go:89] "kindnet-hcvll" [24187a08-ba32-41f3-8a6d-405f4ba5a615] Running
	I0908 11:34:47.042433  619897 system_pods.go:89] "kube-apiserver-addons-960652" [fab1fbbc-576e-4ad1-8ca9-cf5e6ea7ff14] Running
	I0908 11:34:47.042437  619897 system_pods.go:89] "kube-controller-manager-addons-960652" [5b8a7e15-9985-4560-9da8-e78346901b01] Running
	I0908 11:34:47.042443  619897 system_pods.go:89] "kube-ingress-dns-minikube" [812dce79-f0f6-4630-b5db-da829812269d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 11:34:47.042446  619897 system_pods.go:89] "kube-proxy-gz2w6" [0fc0f78a-c81b-49f8-8f9d-647638c4a0b3] Running
	I0908 11:34:47.042450  619897 system_pods.go:89] "kube-scheduler-addons-960652" [959edcf3-2df7-4a58-b606-8d0876bcdc63] Running
	I0908 11:34:47.042455  619897 system_pods.go:89] "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 11:34:47.042463  619897 system_pods.go:89] "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 11:34:47.042469  619897 system_pods.go:89] "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 11:34:47.042476  619897 system_pods.go:89] "registry-creds-764b6fb674-7tdwk" [86622412-d7b1-4c72-9645-325fcf7990fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 11:34:47.042481  619897 system_pods.go:89] "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 11:34:47.042486  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-857s7" [a9e55fef-8cf4-4f53-848b-015009283c41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:47.042491  619897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8gd9h" [c3cd2f62-33df-4910-98f4-c986bbfb834d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 11:34:47.042497  619897 system_pods.go:89] "storage-provisioner" [8b8290c0-5abd-4d01-b1ff-2c6c84e01327] Running
	I0908 11:34:47.042506  619897 system_pods.go:126] duration metric: took 1.057074321s to wait for k8s-apps to be running ...
	I0908 11:34:47.042516  619897 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 11:34:47.042562  619897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:34:47.054907  619897 system_svc.go:56] duration metric: took 12.376844ms WaitForService to wait for kubelet
	I0908 11:34:47.054943  619897 kubeadm.go:578] duration metric: took 46.100563827s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:34:47.054970  619897 node_conditions.go:102] verifying NodePressure condition ...
	I0908 11:34:47.058332  619897 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 11:34:47.058365  619897 node_conditions.go:123] node cpu capacity is 8
	I0908 11:34:47.058382  619897 node_conditions.go:105] duration metric: took 3.406035ms to run NodePressure ...
	I0908 11:34:47.058398  619897 start.go:241] waiting for startup goroutines ...
	I0908 11:34:47.112208  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:47.112291  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:47.185283  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:47.487397  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:47.611447  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:47.611509  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:47.685302  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:47.986859  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:48.111790  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:48.111845  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:48.212537  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:48.485762  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:48.611216  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:48.611231  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:48.685185  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:48.987322  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:49.111952  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:49.112028  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:49.185099  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:49.487146  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:49.611357  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:49.611425  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:49.685831  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:49.986520  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:50.111235  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:50.111566  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:50.185322  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:50.487144  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:50.611525  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:50.611562  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:50.685861  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:50.987117  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:51.111479  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:51.111710  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:51.184953  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:51.487418  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:51.612146  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:51.612194  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:51.685130  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:51.987053  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:52.111261  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:52.111714  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:52.212321  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:52.486467  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:52.611596  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:52.611741  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:52.685289  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:52.987204  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:53.111519  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:53.111677  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:53.185517  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:53.486524  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:53.610506  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:53.610565  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:53.685787  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:53.986777  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:54.111402  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:54.111489  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:54.185264  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:54.486911  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:54.611931  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:54.612774  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:54.685987  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:54.987513  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:55.181760  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:55.182437  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:55.185733  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:55.487716  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:55.610951  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:55.611106  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:55.685506  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:55.986295  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:56.111778  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:56.111836  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:56.185666  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:56.486006  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:56.612033  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:56.612153  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:56.685264  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:56.987838  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:57.111598  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:57.111679  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:57.185794  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:57.487146  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:57.611478  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:57.611627  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:57.685360  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:57.986392  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:58.113269  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:58.113444  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:58.185387  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:58.485940  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:58.611186  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:58.611323  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:58.685104  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:58.986673  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:59.111323  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:59.111375  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:59.185515  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:59.486068  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:34:59.611784  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:34:59.611959  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:34:59.686378  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:34:59.985886  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:00.111043  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:00.111095  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:00.184961  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:00.486898  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:00.611151  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:00.611215  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:00.685620  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:00.987881  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:01.183971  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:01.184269  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:01.185678  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:01.587361  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:01.680444  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:01.680624  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:01.686608  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:01.989266  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:02.186296  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:02.186419  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:02.186868  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:02.488007  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:02.523966  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:35:02.679439  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:02.679721  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:02.685647  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:02.986649  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:03.111080  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:03.111248  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:03.185213  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:03.486816  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:03.612565  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:03.612575  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:03.686155  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:03.985572  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:04.111532  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:04.111675  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:04.185282  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:04.302946  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.778927349s)
	W0908 11:35:04.302999  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:35:04.303056  619897 retry.go:31] will retry after 10.891744842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 11:35:04.486625  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:04.610788  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:04.610928  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:04.685568  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:04.986935  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:05.111418  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:05.111588  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:05.185420  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:05.486018  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:05.612212  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:05.612425  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:05.685384  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:05.987895  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:06.111392  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:06.111412  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:06.185292  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:06.487043  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:06.678720  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:06.679201  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:06.688006  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:06.987196  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:07.111496  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:07.111639  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:07.186150  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:07.487047  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:07.611480  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:07.611717  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:07.685625  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:07.986497  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:08.110672  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:08.110849  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:08.185434  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:08.486033  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:08.611454  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:08.611457  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:08.685572  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:08.986627  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:09.110829  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:09.110857  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:09.186049  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:09.486768  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:09.610976  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:09.611024  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:09.685950  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:09.986636  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:10.112589  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:10.112669  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:10.185518  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:10.486748  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:10.611138  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:10.611328  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:10.685310  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:10.987624  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:11.111587  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:11.111615  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:11.185594  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:11.486221  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:11.612143  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:11.612579  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:11.684970  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:11.987834  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:12.111066  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:12.111278  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:12.184825  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:12.487162  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:12.679998  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:12.680905  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:12.684487  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:12.987067  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:13.179709  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:13.180519  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:13.185146  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:13.486970  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:13.612196  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:13.612343  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:13.685581  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:13.986180  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:14.111891  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:14.176713  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:14.185621  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:14.486342  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:14.611285  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:14.611510  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:14.685104  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:14.987483  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:15.111637  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:15.111778  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:15.185828  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:15.195953  619897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 11:35:15.486548  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:15.611725  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:15.611885  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:15.685709  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:15.986655  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:16.111631  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:16.112272  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:16.185813  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:16.280386  619897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.084382124s)
	W0908 11:35:16.280447  619897 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 11:35:16.280578  619897 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
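The validation failure repeated above is kubectl's client-side schema check: every manifest it applies must declare `apiVersion` and `kind`, and the copy of ig-crd.yaml on the node evidently lacks both. The log does not show that file's contents, but a CustomResourceDefinition manifest normally opens with a header along these lines (a hedged sketch, not the actual gadget CRD; the name and spec fields below are placeholders):

    apiVersion: apiextensions.k8s.io/v1   # required; its absence produces "apiVersion not set"
    kind: CustomResourceDefinition        # required; its absence produces "kind not set"
    metadata:
      name: traces.gadget.example.io      # hypothetical name for illustration only
    spec:
      # group, versions, scope and names omitted in this sketch
      ...

As the error text itself notes, rerunning the apply with `--validate=false` would suppress this check, but that only ignores the malformed file rather than repairing it, which is why the addon retry keeps failing the same way.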
	I0908 11:35:16.487315  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:16.611752  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:16.611920  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:16.685493  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:16.986242  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:17.111693  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:17.111850  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:17.186256  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:17.487542  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:17.610619  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:17.610686  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:17.685867  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:17.986852  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:18.112297  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:18.112314  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:18.185017  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:18.486546  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:18.611918  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:18.611995  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:18.686360  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:18.987509  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:19.110955  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:19.110989  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:19.184874  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:19.487093  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:19.611420  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:19.611538  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:19.685946  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:19.986558  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:20.110946  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:20.110960  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:20.185360  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:20.485902  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:20.611223  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:20.611385  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:20.685562  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:20.986095  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:21.112201  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:21.112247  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:21.185236  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:21.487233  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:21.611512  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:21.611572  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:21.685341  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:21.986013  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:22.111708  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 11:35:22.111812  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:22.186103  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:22.487680  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:22.610921  619897 kapi.go:107] duration metric: took 1m15.004130743s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 11:35:22.610957  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:22.686046  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:22.987155  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:23.111875  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:23.185740  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:23.486982  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:23.611197  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:23.685448  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:23.986470  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:24.111988  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:24.186543  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:24.489037  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:24.611464  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:24.685753  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:24.986773  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:25.111427  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:25.211590  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:25.486293  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:25.611514  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:25.685757  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:25.987023  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:26.111366  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:26.185923  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:26.487353  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:26.612346  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:26.686339  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:26.987274  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:27.112518  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:27.185724  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:27.485886  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:27.610782  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:27.685679  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:27.986530  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:28.111630  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:28.185958  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:28.486562  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:28.612135  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:28.685038  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:28.986555  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:29.112325  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:29.185439  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:29.486574  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:29.612598  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:29.685178  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:29.987054  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:30.111386  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:30.185406  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:30.487581  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:30.680737  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:30.685999  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:30.987513  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:31.197154  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:31.199299  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:31.487047  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:31.678905  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:31.685737  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:31.987274  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:32.180471  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:32.185408  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:32.487437  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:32.611728  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:32.687138  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:32.986554  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:33.111901  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:33.186086  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:33.487250  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:33.611564  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:33.686019  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:33.986813  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:34.111349  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:34.185495  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:34.486931  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:34.611021  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:34.686422  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:34.986348  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:35.111795  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:35.185954  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:35.487585  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:35.611819  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:35.686342  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:35.987090  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:36.111232  619897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 11:35:36.185706  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:36.492311  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:36.611578  619897 kapi.go:107] duration metric: took 1m29.004278672s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 11:35:36.685432  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:36.985719  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:37.185744  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:37.486677  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:37.686085  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:37.987106  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:38.187053  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:38.487258  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:38.685086  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:38.986733  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:39.185944  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 11:35:39.487125  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:39.685639  619897 kapi.go:107] duration metric: took 1m27.503813529s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 11:35:39.687783  619897 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-960652 cluster.
	I0908 11:35:39.689241  619897 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 11:35:39.690786  619897 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
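The three messages above describe the behavior of the gcp-auth addon once enabled: its webhook mounts the host's GCP credentials into every newly created pod unless the pod opts out via a label. A minimal opt-out sketch follows; the log message only names the `gcp-auth-skip-secret` key, so the "true" value and the pod details are illustrative assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"     # label key taken from the log message above; value assumed
    spec:
      containers:
      - name: app
        image: busybox                   # placeholder image
        command: ["sleep", "3600"]

Existing pods keep whatever mounts they were created with, which is why the log also suggests recreating them or rerunning the addon with --refresh.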
	I0908 11:35:39.985950  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:40.487312  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:40.985961  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:41.487275  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:41.986449  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:42.487192  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:42.986372  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:43.486807  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:43.987160  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:44.486339  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:44.987281  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:45.487135  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:45.986240  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:46.486918  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:46.986137  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:47.486893  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:47.985973  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:48.486492  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:48.986167  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:49.487105  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:49.986408  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:50.486565  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:50.986940  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:51.487397  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:51.987134  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:52.486338  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:52.986294  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:53.486897  619897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 11:35:53.987356  619897 kapi.go:107] duration metric: took 1m45.004908227s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 11:35:53.989102  619897 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0908 11:35:53.990257  619897 addons.go:514] duration metric: took 1m53.03583503s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin registry-creds metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0908 11:35:53.990314  619897 start.go:246] waiting for cluster config update ...
	I0908 11:35:53.990336  619897 start.go:255] writing updated cluster config ...
	I0908 11:35:53.990625  619897 ssh_runner.go:195] Run: rm -f paused
	I0908 11:35:53.994735  619897 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:35:53.998298  619897 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-np8sm" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.003374  619897 pod_ready.go:94] pod "coredns-66bc5c9577-np8sm" is "Ready"
	I0908 11:35:54.003404  619897 pod_ready.go:86] duration metric: took 5.078062ms for pod "coredns-66bc5c9577-np8sm" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.006058  619897 pod_ready.go:83] waiting for pod "etcd-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.010406  619897 pod_ready.go:94] pod "etcd-addons-960652" is "Ready"
	I0908 11:35:54.010431  619897 pod_ready.go:86] duration metric: took 4.342035ms for pod "etcd-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.012580  619897 pod_ready.go:83] waiting for pod "kube-apiserver-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.016715  619897 pod_ready.go:94] pod "kube-apiserver-addons-960652" is "Ready"
	I0908 11:35:54.016737  619897 pod_ready.go:86] duration metric: took 4.134176ms for pod "kube-apiserver-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.018644  619897 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.398415  619897 pod_ready.go:94] pod "kube-controller-manager-addons-960652" is "Ready"
	I0908 11:35:54.398443  619897 pod_ready.go:86] duration metric: took 379.776043ms for pod "kube-controller-manager-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.599247  619897 pod_ready.go:83] waiting for pod "kube-proxy-gz2w6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:54.999851  619897 pod_ready.go:94] pod "kube-proxy-gz2w6" is "Ready"
	I0908 11:35:54.999881  619897 pod_ready.go:86] duration metric: took 400.608241ms for pod "kube-proxy-gz2w6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:55.199982  619897 pod_ready.go:83] waiting for pod "kube-scheduler-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:55.598935  619897 pod_ready.go:94] pod "kube-scheduler-addons-960652" is "Ready"
	I0908 11:35:55.598968  619897 pod_ready.go:86] duration metric: took 398.947233ms for pod "kube-scheduler-addons-960652" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 11:35:55.598984  619897 pod_ready.go:40] duration metric: took 1.604216868s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 11:35:55.645728  619897 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 11:35:55.647591  619897 out.go:179] * Done! kubectl is now configured to use "addons-960652" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 08 11:39:56 addons-960652 crio[1049]: time="2025-09-08 11:39:56.044834860Z" level=info msg="Stopped pod sandbox (already stopped): c530aa43b1c67a536f9a240903940abeeb851181159f00fba20eb98f772357db" id=73253840-c03a-4c99-8bfe-be8a9a190958 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 11:39:56 addons-960652 crio[1049]: time="2025-09-08 11:39:56.045274108Z" level=info msg="Removing pod sandbox: c530aa43b1c67a536f9a240903940abeeb851181159f00fba20eb98f772357db" id=52501db9-447f-40a1-b753-39ee49ee21e2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:39:56 addons-960652 crio[1049]: time="2025-09-08 11:39:56.052910589Z" level=info msg="Removed pod sandbox: c530aa43b1c67a536f9a240903940abeeb851181159f00fba20eb98f772357db" id=52501db9-447f-40a1-b753-39ee49ee21e2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 11:40:16 addons-960652 crio[1049]: time="2025-09-08 11:40:16.786324513Z" level=info msg="Pulling image: docker.io/nginx:latest" id=9e023df0-322a-4abf-8b5f-b08791e2dc88 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:40:16 addons-960652 crio[1049]: time="2025-09-08 11:40:16.790250203Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 08 11:40:29 addons-960652 crio[1049]: time="2025-09-08 11:40:29.692735805Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2c5b5937-a0b0-4dfa-a156-560cc9812e85 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:40:29 addons-960652 crio[1049]: time="2025-09-08 11:40:29.693706268Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=2c5b5937-a0b0-4dfa-a156-560cc9812e85 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:40:41 addons-960652 crio[1049]: time="2025-09-08 11:40:41.692364632Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ade296cc-1c4d-4b53-8220-8bf876a4b02e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:40:41 addons-960652 crio[1049]: time="2025-09-08 11:40:41.692683178Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ade296cc-1c4d-4b53-8220-8bf876a4b02e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:40:46 addons-960652 crio[1049]: time="2025-09-08 11:40:46.876455095Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=6c0bea20-f3b3-4ca3-82b5-c7d59ed40765 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:40:46 addons-960652 crio[1049]: time="2025-09-08 11:40:46.880809003Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 08 11:41:28 addons-960652 crio[1049]: time="2025-09-08 11:41:28.692310693Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=79ab8678-136f-434b-b036-2ac0d524e214 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:41:28 addons-960652 crio[1049]: time="2025-09-08 11:41:28.692667538Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=79ab8678-136f-434b-b036-2ac0d524e214 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:41:39 addons-960652 crio[1049]: time="2025-09-08 11:41:39.692402758Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=24d2cd4f-ca65-41de-b6e5-34ab339643bc name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:41:39 addons-960652 crio[1049]: time="2025-09-08 11:41:39.692709899Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=24d2cd4f-ca65-41de-b6e5-34ab339643bc name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:41:51 addons-960652 crio[1049]: time="2025-09-08 11:41:51.691756254Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=742b725f-fb8a-4841-b58a-dc409befd550 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:41:51 addons-960652 crio[1049]: time="2025-09-08 11:41:51.692003408Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=742b725f-fb8a-4841-b58a-dc409befd550 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:42:02 addons-960652 crio[1049]: time="2025-09-08 11:42:02.691599397Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cc9988c3-60bc-4da2-a9fe-1b1b89247900 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:42:02 addons-960652 crio[1049]: time="2025-09-08 11:42:02.691851622Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=cc9988c3-60bc-4da2-a9fe-1b1b89247900 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:42:02 addons-960652 crio[1049]: time="2025-09-08 11:42:02.692571729Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=f164e06b-0b75-4d04-b363-d8c5f4f8f68b name=/runtime.v1.ImageService/PullImage
	Sep 08 11:42:02 addons-960652 crio[1049]: time="2025-09-08 11:42:02.698536125Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 08 11:42:32 addons-960652 crio[1049]: time="2025-09-08 11:42:32.833863539Z" level=info msg="Pulling image: docker.io/nginx:latest" id=664aa04d-51ec-417e-9277-f5849020b0aa name=/runtime.v1.ImageService/PullImage
	Sep 08 11:42:32 addons-960652 crio[1049]: time="2025-09-08 11:42:32.839026552Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 08 11:42:47 addons-960652 crio[1049]: time="2025-09-08 11:42:47.691467562Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=40a8bf02-bcc1-4410-8c97-c426f8f1edd0 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:42:47 addons-960652 crio[1049]: time="2025-09-08 11:42:47.691817282Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=40a8bf02-bcc1-4410-8c97-c426f8f1edd0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	040337eeadd3e       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                                              6 minutes ago       Running             nginx                                    0                   cbe026d90e099       nginx
	3f1bd2139b6f5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   ff5b5b008dfeb       busybox
	e225dd1353c5e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   19bf91601224b       csi-hostpathplugin-x742h
	a3e0daff14684       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   19bf91601224b       csi-hostpathplugin-x742h
	dc98ed0921633       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   19bf91601224b       csi-hostpathplugin-x742h
	2ad1b170c6333       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   19bf91601224b       csi-hostpathplugin-x742h
	e9b58257111f3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   19bf91601224b       csi-hostpathplugin-x742h
	d6f7e7940414a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506                            7 minutes ago       Running             gadget                                   0                   db85239541cfa       gadget-jrm97
	62b0c13b35dd5       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   3aabeaf53b3c1       csi-hostpath-resizer-0
	f7f788c90a319       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   a030436d0600c       snapshot-controller-7d9fbc56b8-8gd9h
	a9adb3ab45f72       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   d8b6af65b1141       snapshot-controller-7d9fbc56b8-857s7
	475dea4776563       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   c19df1d4f546e       local-path-provisioner-648f6765c9-2dzb2
	f3d401e158a14       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   19bf91601224b       csi-hostpathplugin-x742h
	026097ca491b2       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   a190356160f21       csi-hostpath-attacher-0
	15020f93f9117       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   28fe3efc23b77       coredns-66bc5c9577-np8sm
	197348fff491e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   02a8ce96831d2       storage-provisioner
	5d7f0a6932d37       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             8 minutes ago       Running             kindnet-cni                              0                   3c6ebe59944d3       kindnet-hcvll
	599d0409ebde1       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             8 minutes ago       Running             kube-proxy                               0                   54443dfaf20e7       kube-proxy-gz2w6
	eb1813ea98da6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             9 minutes ago       Running             etcd                                     0                   6114837aea7c4       etcd-addons-960652
	d629720d8e10e       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             9 minutes ago       Running             kube-scheduler                           0                   351287eb6cb47       kube-scheduler-addons-960652
	f34ea2a919bfd       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             9 minutes ago       Running             kube-controller-manager                  0                   63ffabb2c612c       kube-controller-manager-addons-960652
	b1c29c4267f5b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             9 minutes ago       Running             kube-apiserver                           0                   e7a8b62c1e67e       kube-apiserver-addons-960652
	
	
	==> coredns [15020f93f911792a0cf0f70bbcb201066db25d4f586c88da47522ac9476fe8cd] <==
	[INFO] 10.244.0.20:55789 - 45503 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.009761105s
	[INFO] 10.244.0.20:45928 - 6020 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006868993s
	[INFO] 10.244.0.20:40392 - 34198 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00737584s
	[INFO] 10.244.0.20:41376 - 344 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007315672s
	[INFO] 10.244.0.20:49505 - 56346 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007317779s
	[INFO] 10.244.0.20:55789 - 53722 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007447395s
	[INFO] 10.244.0.20:55469 - 46135 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007707426s
	[INFO] 10.244.0.20:34626 - 44825 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006688581s
	[INFO] 10.244.0.20:37284 - 24259 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007650469s
	[INFO] 10.244.0.20:45928 - 2419 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008440145s
	[INFO] 10.244.0.20:55789 - 43424 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007980346s
	[INFO] 10.244.0.20:37284 - 9002 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008304861s
	[INFO] 10.244.0.20:34626 - 51224 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007040599s
	[INFO] 10.244.0.20:41376 - 33076 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008323549s
	[INFO] 10.244.0.20:40392 - 62296 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008621673s
	[INFO] 10.244.0.20:49505 - 23716 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008278941s
	[INFO] 10.244.0.20:55469 - 42017 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008042031s
	[INFO] 10.244.0.20:55789 - 26689 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000184831s
	[INFO] 10.244.0.20:40392 - 36924 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000146422s
	[INFO] 10.244.0.20:34626 - 36429 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000270047s
	[INFO] 10.244.0.20:45928 - 32850 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000132765s
	[INFO] 10.244.0.20:55469 - 58604 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000134364s
	[INFO] 10.244.0.20:49505 - 3526 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00015464s
	[INFO] 10.244.0.20:37284 - 64630 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00029907s
	[INFO] 10.244.0.20:41376 - 54475 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000110207s
	
	
	==> describe nodes <==
	Name:               addons-960652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-960652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=addons-960652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_33_56_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-960652
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-960652"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:33:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-960652
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:42:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:36:59 +0000   Mon, 08 Sep 2025 11:33:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:36:59 +0000   Mon, 08 Sep 2025 11:33:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:36:59 +0000   Mon, 08 Sep 2025 11:33:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:36:59 +0000   Mon, 08 Sep 2025 11:34:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-960652
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 d355e2ae02d844869ee2220acbc5d523
	  System UUID:                f41e624b-4547-4702-9350-59a549f70159
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  default                     hello-world-app-5d498dc89-p2zlh            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  default                     task-pv-pod                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-jrm97                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 coredns-66bc5c9577-np8sm                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m50s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 csi-hostpathplugin-x742h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 etcd-addons-960652                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m56s
	  kube-system                 kindnet-hcvll                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m50s
	  kube-system                 kube-apiserver-addons-960652               250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 kube-controller-manager-addons-960652      200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 kube-proxy-gz2w6                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-scheduler-addons-960652               100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 snapshot-controller-7d9fbc56b8-857s7       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 snapshot-controller-7d9fbc56b8-8gd9h       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  local-path-storage          local-path-provisioner-648f6765c9-2dzb2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 8m45s                kube-proxy       
	  Normal   Starting                 9m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m2s (x8 over 9m2s)  kubelet          Node addons-960652 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m2s (x8 over 9m2s)  kubelet          Node addons-960652 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m2s (x8 over 9m2s)  kubelet          Node addons-960652 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m56s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m56s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m56s                kubelet          Node addons-960652 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m56s                kubelet          Node addons-960652 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m56s                kubelet          Node addons-960652 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m52s                node-controller  Node addons-960652 event: Registered Node addons-960652 in Controller
	  Normal   NodeReady                8m6s                 kubelet          Node addons-960652 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.003955] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000005] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000001] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +8.187305] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000030] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000006] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000002] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[Sep 8 11:36] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +1.022122] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +2.019826] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +4.219629] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[Sep 8 11:37] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +16.130550] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +33.273137] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	
	
	==> etcd [eb1813ea98da6d09685dd2a84a69cc589435452d8367cd32201b9d54d966eaf9] <==
	{"level":"warn","ts":"2025-09-08T11:34:05.094092Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"316.065321ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry-creds\" limit:1 ","response":"range_response_count:1 size:7265"}
	{"level":"warn","ts":"2025-09-08T11:34:05.094171Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"294.854758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-hcvll\" limit:1 ","response":"range_response_count:1 size:5305"}
	{"level":"warn","ts":"2025-09-08T11:34:05.094203Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.079115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-08T11:34:05.095705Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"296.08127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T11:34:05.099828Z","caller":"traceutil/trace.go:172","msg":"trace[375842253] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:434; }","duration":"417.715612ms","start":"2025-09-08T11:34:04.682098Z","end":"2025-09-08T11:34:05.099813Z","steps":["trace[375842253] 'agreement among raft nodes before linearized reading'  (duration: 299.511246ms)","trace[375842253] 'range keys from in-memory index tree'  (duration: 111.993014ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T11:34:05.177083Z","caller":"traceutil/trace.go:172","msg":"trace[4141782] range","detail":"{range_begin:/registry/deployments/kube-system/registry-creds; range_end:; response_count:1; response_revision:434; }","duration":"399.054807ms","start":"2025-09-08T11:34:04.778006Z","end":"2025-09-08T11:34:05.177061Z","steps":["trace[4141782] 'agreement among raft nodes before linearized reading'  (duration: 203.943485ms)","trace[4141782] 'range keys from in-memory index tree'  (duration: 112.022001ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T11:34:05.177216Z","caller":"traceutil/trace.go:172","msg":"trace[764876706] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-hcvll; range_end:; response_count:1; response_revision:434; }","duration":"377.897225ms","start":"2025-09-08T11:34:04.799310Z","end":"2025-09-08T11:34:05.177207Z","steps":["trace[764876706] 'agreement among raft nodes before linearized reading'  (duration: 182.632567ms)","trace[764876706] 'range keys from in-memory index tree'  (duration: 112.183126ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:34:05.178257Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:34:04.799289Z","time spent":"378.942309ms","remote":"127.0.0.1:55838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":5329,"request content":"key:\"/registry/pods/kube-system/kindnet-hcvll\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T11:34:05.178481Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:34:04.682090Z","time spent":"496.379004ms","remote":"127.0.0.1:56376","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/kube-system/registry\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T11:34:05.178629Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:34:04.777987Z","time spent":"400.612532ms","remote":"127.0.0.1:56376","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":7289,"request content":"key:\"/registry/deployments/kube-system/registry-creds\" limit:1 "}
	{"level":"info","ts":"2025-09-08T11:34:05.177398Z","caller":"traceutil/trace.go:172","msg":"trace[1196628434] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:434; }","duration":"194.26934ms","start":"2025-09-08T11:34:04.983118Z","end":"2025-09-08T11:34:05.177387Z","steps":["trace[1196628434] 'range keys from in-memory index tree'  (duration: 111.003409ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:34:05.179505Z","caller":"traceutil/trace.go:172","msg":"trace[729211829] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:434; }","duration":"379.876324ms","start":"2025-09-08T11:34:04.799612Z","end":"2025-09-08T11:34:05.179488Z","steps":["trace[729211829] 'agreement among raft nodes before linearized reading'  (duration: 182.321546ms)","trace[729211829] 'range keys from in-memory index tree'  (duration: 113.742696ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T11:34:05.179756Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T11:34:04.799604Z","time spent":"380.134014ms","remote":"127.0.0.1:55740","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2025-09-08T11:34:05.888187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.580507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:1 size:3351"}
	{"level":"info","ts":"2025-09-08T11:34:05.888370Z","caller":"traceutil/trace.go:172","msg":"trace[2075139565] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:495; }","duration":"102.774288ms","start":"2025-09-08T11:34:05.785577Z","end":"2025-09-08T11:34:05.888351Z","steps":["trace[2075139565] 'agreement among raft nodes before linearized reading'  (duration: 102.460147ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:34:06.981604Z","caller":"traceutil/trace.go:172","msg":"trace[140328556] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"104.602292ms","start":"2025-09-08T11:34:06.876983Z","end":"2025-09-08T11:34:06.981586Z","steps":["trace[140328556] 'process raft request'  (duration: 103.826859ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:34:06.982008Z","caller":"traceutil/trace.go:172","msg":"trace[1025344530] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"103.565338ms","start":"2025-09-08T11:34:06.878414Z","end":"2025-09-08T11:34:06.981979Z","steps":["trace[1025344530] 'process raft request'  (duration: 102.70568ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:34:06.982384Z","caller":"traceutil/trace.go:172","msg":"trace[40552992] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"103.629373ms","start":"2025-09-08T11:34:06.878738Z","end":"2025-09-08T11:34:06.982367Z","steps":["trace[40552992] 'process raft request'  (duration: 102.433256ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T11:34:09.399024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:34:09.419310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:34:29.926253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:34:29.933456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:34:29.957274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51944","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:35:25.876478Z","caller":"traceutil/trace.go:172","msg":"trace[623256269] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"148.215321ms","start":"2025-09-08T11:35:25.728243Z","end":"2025-09-08T11:35:25.876458Z","steps":["trace[623256269] 'process raft request'  (duration: 148.037395ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T11:35:31.195182Z","caller":"traceutil/trace.go:172","msg":"trace[1241191236] transaction","detail":"{read_only:false; response_revision:1184; number_of_response:1; }","duration":"112.862332ms","start":"2025-09-08T11:35:31.082299Z","end":"2025-09-08T11:35:31.195161Z","steps":["trace[1241191236] 'process raft request'  (duration: 112.473492ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:42:51 up  2:25,  0 users,  load average: 0.15, 0.69, 2.87
	Linux addons-960652 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [5d7f0a6932d372d22b05cc05eee35bf50224e6b5f78b8ad555a511f89befb62e] <==
	I0908 11:40:45.299832       1 main.go:301] handling current node
	I0908 11:40:55.304506       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:40:55.304549       1 main.go:301] handling current node
	I0908 11:41:05.303703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:41:05.303751       1 main.go:301] handling current node
	I0908 11:41:15.298670       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:41:15.298721       1 main.go:301] handling current node
	I0908 11:41:25.306092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:41:25.306147       1 main.go:301] handling current node
	I0908 11:41:35.299124       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:41:35.299164       1 main.go:301] handling current node
	I0908 11:41:45.302214       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:41:45.302250       1 main.go:301] handling current node
	I0908 11:41:55.304544       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:41:55.304596       1 main.go:301] handling current node
	I0908 11:42:05.302221       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:42:05.302267       1 main.go:301] handling current node
	I0908 11:42:15.305940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:42:15.305984       1 main.go:301] handling current node
	I0908 11:42:25.303803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:42:25.303841       1 main.go:301] handling current node
	I0908 11:42:35.300769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:42:35.300824       1 main.go:301] handling current node
	I0908 11:42:45.303769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:42:45.303806       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b1c29c4267f5b8e8626d17c51f00a91014b42f7e5064ceb3dd7f9e9035b18520] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 11:34:52.083750       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0908 11:34:56.265950       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:35:16.001662       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:36:04.201341       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 11:36:06.378192       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47012: use of closed network connection
	E0908 11:36:06.553294       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:47030: use of closed network connection
	I0908 11:36:15.711979       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.0.75"}
	I0908 11:36:36.618832       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 11:36:36.818630       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.102.130"}
	I0908 11:36:38.404920       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:36:53.087754       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0908 11:37:05.409304       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:37:42.638086       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:38:35.019218       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:38:56.787115       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.162.248"}
	E0908 11:39:00.214823       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0908 11:39:08.803565       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:39:47.272796       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:40:24.263422       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:41:01.433072       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:41:34.529835       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:42:25.013232       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [f34ea2a919bfd72165d68dd34013bba56722328d504a8ed43f59355ccd0b9579] <==
	I0908 11:33:59.909794       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-960652"
	I0908 11:33:59.909835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 11:33:59.909850       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0908 11:33:59.909985       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 11:33:59.910133       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 11:33:59.910254       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0908 11:33:59.910289       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 11:33:59.911034       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0908 11:33:59.912920       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:33:59.914370       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 11:33:59.935049       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:34:05.498092       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="default/cloud-spanner-emulator" err="EndpointSlice informer cache is out of date"
	E0908 11:34:06.397177       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0908 11:34:29.918356       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 11:34:29.918512       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0908 11:34:29.918553       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0908 11:34:29.946811       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0908 11:34:29.950895       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0908 11:34:30.018939       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:34:30.051123       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:34:49.986216       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0908 11:36:19.876777       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0908 11:36:41.521251       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0908 11:36:44.128028       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0908 11:39:10.944638       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	
	
	==> kube-proxy [599d0409ebde1f2bd14aa540b01ef6e24cd7100d818dbd600daed98df824357a] <==
	I0908 11:34:05.394445       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:34:06.178921       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:34:06.279441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:34:06.279503       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:34:06.279758       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:34:06.490818       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:34:06.490978       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:34:06.576226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:34:06.576832       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:34:06.577304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:34:06.579321       1 config.go:200] "Starting service config controller"
	I0908 11:34:06.581303       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:34:06.580020       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:34:06.581494       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:34:06.580647       1 config.go:309] "Starting node config controller"
	I0908 11:34:06.581583       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:34:06.581615       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:34:06.580045       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:34:06.581691       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:34:06.681528       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:34:06.681735       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:34:06.681780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d629720d8e10ef56f43bfb64a71c210e8766e43becacccb06e78365c2f7da60e] <==
	E0908 11:33:53.182101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 11:33:53.182256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 11:33:53.182425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 11:33:53.182546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 11:33:53.182774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 11:33:53.182812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 11:33:53.182888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 11:33:53.182980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 11:33:53.183058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 11:33:53.183171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 11:33:53.183286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 11:33:53.183413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 11:33:53.183468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 11:33:53.183521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 11:33:53.183568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 11:33:53.185770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 11:33:54.007333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 11:33:54.058736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 11:33:54.096543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 11:33:54.111349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 11:33:54.136614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 11:33:54.149771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 11:33:54.259506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 11:33:54.259505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0908 11:33:54.598099       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 11:41:45 addons-960652 kubelet[1680]: E0908 11:41:45.895627    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331705895297554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:41:45 addons-960652 kubelet[1680]: E0908 11:41:45.895680    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331705895297554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:41:51 addons-960652 kubelet[1680]: E0908 11:41:51.692347    1680 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-p2zlh" podUID="8a3572b2-f36f-4bfe-a4d4-6472fc661464"
	Sep 08 11:41:55 addons-960652 kubelet[1680]: E0908 11:41:55.692859    1680 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="990d30a1-800d-4554-930c-b8e09bd450c0"
	Sep 08 11:41:55 addons-960652 kubelet[1680]: E0908 11:41:55.807301    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/468828eb3d563dc8601fc09735ea2b87471e1a190d935021aaf91d13f2ce1a20/diff" to get inode usage: stat /var/lib/containers/storage/overlay/468828eb3d563dc8601fc09735ea2b87471e1a190d935021aaf91d13f2ce1a20/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:41:55 addons-960652 kubelet[1680]: E0908 11:41:55.890396    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/06363a2fd3da9322aaad2e85a7d9996daf2f234e13a155920af5ce7b4a3d1456/diff" to get inode usage: stat /var/lib/containers/storage/overlay/06363a2fd3da9322aaad2e85a7d9996daf2f234e13a155920af5ce7b4a3d1456/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:41:55 addons-960652 kubelet[1680]: E0908 11:41:55.898564    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331715898161738  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:41:55 addons-960652 kubelet[1680]: E0908 11:41:55.898599    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331715898161738  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:41:55 addons-960652 kubelet[1680]: E0908 11:41:55.903178    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/468828eb3d563dc8601fc09735ea2b87471e1a190d935021aaf91d13f2ce1a20/diff" to get inode usage: stat /var/lib/containers/storage/overlay/468828eb3d563dc8601fc09735ea2b87471e1a190d935021aaf91d13f2ce1a20/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:41:55 addons-960652 kubelet[1680]: E0908 11:41:55.907439    1680 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/06363a2fd3da9322aaad2e85a7d9996daf2f234e13a155920af5ce7b4a3d1456/diff" to get inode usage: stat /var/lib/containers/storage/overlay/06363a2fd3da9322aaad2e85a7d9996daf2f234e13a155920af5ce7b4a3d1456/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 11:42:05 addons-960652 kubelet[1680]: E0908 11:42:05.901844    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331725901439852  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:05 addons-960652 kubelet[1680]: E0908 11:42:05.901888    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331725901439852  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:15 addons-960652 kubelet[1680]: E0908 11:42:15.905601    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331735905216948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:15 addons-960652 kubelet[1680]: E0908 11:42:15.905644    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331735905216948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:25 addons-960652 kubelet[1680]: E0908 11:42:25.908159    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331745907793081  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:25 addons-960652 kubelet[1680]: E0908 11:42:25.908201    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331745907793081  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:32 addons-960652 kubelet[1680]: E0908 11:42:32.833364    1680 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Sep 08 11:42:32 addons-960652 kubelet[1680]: E0908 11:42:32.833440    1680 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Sep 08 11:42:32 addons-960652 kubelet[1680]: E0908 11:42:32.833708    1680 kuberuntime_manager.go:1449] "Unhandled Error" err="container hello-world-app start failed in pod hello-world-app-5d498dc89-p2zlh_default(8a3572b2-f36f-4bfe-a4d4-6472fc661464): ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 11:42:32 addons-960652 kubelet[1680]: E0908 11:42:32.833769    1680 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-p2zlh" podUID="8a3572b2-f36f-4bfe-a4d4-6472fc661464"
	Sep 08 11:42:35 addons-960652 kubelet[1680]: E0908 11:42:35.911269    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331755910918757  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:35 addons-960652 kubelet[1680]: E0908 11:42:35.911308    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331755910918757  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:45 addons-960652 kubelet[1680]: E0908 11:42:45.914111    1680 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757331765913694063  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:45 addons-960652 kubelet[1680]: E0908 11:42:45.914149    1680 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757331765913694063  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:579278}  inodes_used:{value:224}}"
	Sep 08 11:42:47 addons-960652 kubelet[1680]: E0908 11:42:47.692160    1680 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-p2zlh" podUID="8a3572b2-f36f-4bfe-a4d4-6472fc661464"
	
	
	==> storage-provisioner [197348fff491ed8e82c4dfed081b5664cda583220b70b29b5f53b687db96e7ab] <==
	W0908 11:42:27.129064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:29.132775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:29.137908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:31.141375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:31.147473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:33.150966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:33.157122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:35.162874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:35.167635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:37.171229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:37.175804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:39.179729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:39.184131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:41.187449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:41.191897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:43.195582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:43.201651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:45.205091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:45.209625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:47.213805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:47.218424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:49.223039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:49.227903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:51.231225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:42:51.237110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-960652 -n addons-960652
helpers_test.go:269: (dbg) Run:  kubectl --context addons-960652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-p2zlh task-pv-pod
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-960652 describe pod hello-world-app-5d498dc89-p2zlh task-pv-pod
helpers_test.go:290: (dbg) kubectl --context addons-960652 describe pod hello-world-app-5d498dc89-p2zlh task-pv-pod:

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-p2zlh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-960652/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:38:56 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:           10.244.0.31
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zp9ts (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zp9ts:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m56s                default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-p2zlh to addons-960652
	  Normal   Pulling    50s (x4 over 3m56s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     20s (x4 over 3m21s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     20s (x4 over 3m21s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    5s (x6 over 3m20s)   kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     5s (x6 over 3m20s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-960652/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:36:49 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wx6s5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-wx6s5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m3s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-960652
	  Warning  Failed     5m32s                 kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m6s (x4 over 5m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m6s (x3 over 4m46s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    57s (x10 over 5m32s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     57s (x10 over 5m32s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    45s (x5 over 6m2s)    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-960652 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.771033848s)
--- FAIL: TestAddons/parallel/CSI (387.49s)
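Both non-running pods above are stuck in ErrImagePull/ImagePullBackOff with the same root cause: docker.io's unauthenticated pull rate limit ("toomanyrequests"). A minimal mitigation sketch, not part of the recorded run, is to pull the images once on the host and side-load them into the profile so the kubelet never has to reach Docker Hub:

	# pull on the host (within its own rate limit, or after `docker login`)
	docker pull docker.io/kicbase/echo-server:1.0
	docker pull docker.io/library/nginx:latest
	# copy the cached images into the minikube node
	minikube -p addons-960652 image load docker.io/kicbase/echo-server:1.0
	minikube -p addons-960652 image load docker.io/library/nginx:latest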

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-982703 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-982703 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-982703 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-982703 --alsologtostderr -v=1] stderr:
I0908 11:52:35.260191  664154 out.go:360] Setting OutFile to fd 1 ...
I0908 11:52:35.260457  664154 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:52:35.260467  664154 out.go:374] Setting ErrFile to fd 2...
I0908 11:52:35.260471  664154 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:52:35.260673  664154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
I0908 11:52:35.260947  664154 mustload.go:65] Loading cluster: functional-982703
I0908 11:52:35.261343  664154 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:52:35.261752  664154 cli_runner.go:164] Run: docker container inspect functional-982703 --format={{.State.Status}}
I0908 11:52:35.279783  664154 host.go:66] Checking if "functional-982703" exists ...
I0908 11:52:35.280045  664154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 11:52:35.333569  664154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 11:52:35.323330753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0908 11:52:35.333742  664154 api_server.go:166] Checking apiserver status ...
I0908 11:52:35.333816  664154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0908 11:52:35.333868  664154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
I0908 11:52:35.353336  664154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
I0908 11:52:35.447716  664154 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5432/cgroup
I0908 11:52:35.458229  664154 api_server.go:182] apiserver freezer: "6:freezer:/docker/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/crio/crio-6cc77c69d4b6b4a1fb131f9556f8c040fa5ebad2302358d862f48e91f8348ccd"
I0908 11:52:35.458307  664154 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/crio/crio-6cc77c69d4b6b4a1fb131f9556f8c040fa5ebad2302358d862f48e91f8348ccd/freezer.state
I0908 11:52:35.468377  664154 api_server.go:204] freezer state: "THAWED"
I0908 11:52:35.468414  664154 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0908 11:52:35.473155  664154 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0908 11:52:35.473213  664154 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0908 11:52:35.473388  664154 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:52:35.473414  664154 addons.go:69] Setting dashboard=true in profile "functional-982703"
I0908 11:52:35.473425  664154 addons.go:238] Setting addon dashboard=true in "functional-982703"
I0908 11:52:35.473452  664154 host.go:66] Checking if "functional-982703" exists ...
I0908 11:52:35.473772  664154 cli_runner.go:164] Run: docker container inspect functional-982703 --format={{.State.Status}}
I0908 11:52:35.494997  664154 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0908 11:52:35.496410  664154 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0908 11:52:35.497756  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0908 11:52:35.497778  664154 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0908 11:52:35.497857  664154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
I0908 11:52:35.517245  664154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
I0908 11:52:35.618566  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0908 11:52:35.618595  664154 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0908 11:52:35.638334  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0908 11:52:35.638366  664154 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0908 11:52:35.656278  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0908 11:52:35.656308  664154 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0908 11:52:35.674617  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0908 11:52:35.674647  664154 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0908 11:52:35.693274  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0908 11:52:35.693308  664154 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0908 11:52:35.712635  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0908 11:52:35.712662  664154 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0908 11:52:35.731707  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0908 11:52:35.731742  664154 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0908 11:52:35.751694  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0908 11:52:35.751728  664154 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0908 11:52:35.770782  664154 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0908 11:52:35.770809  664154 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0908 11:52:35.789752  664154 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0908 11:52:36.379641  664154 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-982703 addons enable metrics-server

                                                
                                                
I0908 11:52:36.380987  664154 addons.go:201] Writing out "functional-982703" config to set dashboard=true...
W0908 11:52:36.381252  664154 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0908 11:52:36.381955  664154 kapi.go:59] client config for functional-982703: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt", KeyFile:"/home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.key", CAFile:"/home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0908 11:52:36.382417  664154 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0908 11:52:36.382442  664154 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0908 11:52:36.382448  664154 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0908 11:52:36.382453  664154 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0908 11:52:36.382458  664154 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0908 11:52:36.391007  664154 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  1bbb0911-c9e6-4dbd-a3c7-850b99ce2f35 1261 0 2025-09-08 11:52:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-08 11:52:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.26.124,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.26.124],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0908 11:52:36.391181  664154 out.go:285] * Launching proxy ...
* Launching proxy ...
I0908 11:52:36.391240  664154 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-982703 proxy --port 36195]
I0908 11:52:36.391503  664154 dashboard.go:157] Waiting for kubectl to output host:port ...
I0908 11:52:36.436984  664154 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0908 11:52:36.437073  664154 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0908 11:52:36.446046  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f312cc3d-0052-4b84-a25f-4116d98a4954] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000c82f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000546780 TLS:<nil>}
I0908 11:52:36.446163  664154 retry.go:31] will retry after 111.452µs: Temporary Error: unexpected response code: 503
I0908 11:52:36.449773  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[101eee2c-36f9-478b-9384-91a41f4e54ae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000a77b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045c140 TLS:<nil>}
I0908 11:52:36.449827  664154 retry.go:31] will retry after 157.193µs: Temporary Error: unexpected response code: 503
I0908 11:52:36.453945  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d2d5754c-41ab-43c2-9aae-0d7406e25123] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc00012ae40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I0908 11:52:36.454015  664154 retry.go:31] will retry after 322.294µs: Temporary Error: unexpected response code: 503
I0908 11:52:36.457795  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d7a2133a-b53c-45e0-a55e-a778b1c95c78] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000c83040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005468c0 TLS:<nil>}
I0908 11:52:36.457863  664154 retry.go:31] will retry after 283.151µs: Temporary Error: unexpected response code: 503
I0908 11:52:36.461473  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d7762a0-ae5b-4b45-968c-eccb17992ad3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000a77c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045c280 TLS:<nil>}
I0908 11:52:36.461529  664154 retry.go:31] will retry after 397.483µs: Temporary Error: unexpected response code: 503
I0908 11:52:36.465107  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5e9cb846-5192-4c6e-a8ae-6961d2110b7f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc00012afc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I0908 11:52:36.465176  664154 retry.go:31] will retry after 850.118µs: Temporary Error: unexpected response code: 503
I0908 11:52:36.468425  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5ab5caf5-7116-4e8a-9d86-1257b0fa565e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc0009540c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000546a00 TLS:<nil>}
I0908 11:52:36.468474  664154 retry.go:31] will retry after 639.509µs: Temporary Error: unexpected response code: 503
I0908 11:52:36.471622  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c238e713-527e-4ca6-b372-c8a99079c7d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc00012b080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I0908 11:52:36.471699  664154 retry.go:31] will retry after 2.472021ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.477346  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20934c1e-e9f9-4958-bb7f-83d92c03e849] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc00012b140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000546b40 TLS:<nil>}
I0908 11:52:36.477400  664154 retry.go:31] will retry after 1.618094ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.481847  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ababbc80-55e0-463b-8a9e-f2072076ff34] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000c831c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000546c80 TLS:<nil>}
I0908 11:52:36.481912  664154 retry.go:31] will retry after 5.341114ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.490683  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d312a61-52f4-4d75-a52e-179fef020325] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc0009541c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045c3c0 TLS:<nil>}
I0908 11:52:36.490748  664154 retry.go:31] will retry after 5.194792ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.499685  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2584497a-4498-41f0-a892-cb94d0e9db76] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000c832c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I0908 11:52:36.499751  664154 retry.go:31] will retry after 9.394158ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.512825  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5113ed87-7483-42fa-a1fa-82ab290a2833] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000954340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045c640 TLS:<nil>}
I0908 11:52:36.512899  664154 retry.go:31] will retry after 15.580054ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.532431  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3c634340-5190-4ffe-ad8e-59ca00ec9cef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000c833c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I0908 11:52:36.532512  664154 retry.go:31] will retry after 11.361509ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.548366  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[521f40a9-7fdb-477a-90c9-e896cc3e9632] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000c83480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045c780 TLS:<nil>}
I0908 11:52:36.548442  664154 retry.go:31] will retry after 17.340566ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.570066  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ef02a294-771e-4764-aa15-a7a096db154e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc00012b280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045ca00 TLS:<nil>}
I0908 11:52:36.570142  664154 retry.go:31] will retry after 38.263768ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.612671  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[df7cd416-560e-4f8c-ac95-2d58ef118679] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000c83540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000546dc0 TLS:<nil>}
I0908 11:52:36.612754  664154 retry.go:31] will retry after 90.40894ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.707433  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d48a61fc-9ef3-41e0-9507-e5bb278df57c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000c83600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d040 TLS:<nil>}
I0908 11:52:36.707504  664154 retry.go:31] will retry after 109.967801ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.821013  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[03880e06-5ea3-46fb-98df-93f23e014953] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000954580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d2c0 TLS:<nil>}
I0908 11:52:36.821111  664154 retry.go:31] will retry after 124.609182ms: Temporary Error: unexpected response code: 503
I0908 11:52:36.950290  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f87ed2cd-df76-4327-bb9f-68266576302c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:36 GMT]] Body:0xc000c83700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002077c0 TLS:<nil>}
I0908 11:52:36.950371  664154 retry.go:31] will retry after 263.954022ms: Temporary Error: unexpected response code: 503
I0908 11:52:37.218160  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eb0c3985-174a-4040-84e3-31f611c04f08] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:37 GMT]] Body:0xc000954700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d400 TLS:<nil>}
I0908 11:52:37.218226  664154 retry.go:31] will retry after 380.770726ms: Temporary Error: unexpected response code: 503
I0908 11:52:37.603394  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[53ef397f-17a7-448f-91f4-3af2dfe22a9a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:37 GMT]] Body:0xc000c83800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I0908 11:52:37.603477  664154 retry.go:31] will retry after 275.966983ms: Temporary Error: unexpected response code: 503
I0908 11:52:37.883072  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0e6f17e0-9916-4344-a98e-dbfe470bd76c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:37 GMT]] Body:0xc000954800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00045d680 TLS:<nil>}
I0908 11:52:37.883154  664154 retry.go:31] will retry after 611.179523ms: Temporary Error: unexpected response code: 503
I0908 11:52:38.498274  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d2f968fc-e81e-4d57-bf93-730c0e15fc62] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:38 GMT]] Body:0xc00012b380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207a40 TLS:<nil>}
I0908 11:52:38.498343  664154 retry.go:31] will retry after 1.182232836s: Temporary Error: unexpected response code: 503
I0908 11:52:39.684884  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[902207c1-5516-413e-94d6-898ae8fd565f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:39 GMT]] Body:0xc00012b440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000546f00 TLS:<nil>}
I0908 11:52:39.684954  664154 retry.go:31] will retry after 1.600486849s: Temporary Error: unexpected response code: 503
I0908 11:52:41.288603  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36bd04e7-6e8b-4dab-a31f-e62f01f46a9a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:41 GMT]] Body:0xc000954980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000547040 TLS:<nil>}
I0908 11:52:41.288667  664154 retry.go:31] will retry after 2.737835403s: Temporary Error: unexpected response code: 503
I0908 11:52:44.030839  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6de2e175-a166-427e-bcaa-b547149e0b81] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:44 GMT]] Body:0xc00012b540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207b80 TLS:<nil>}
I0908 11:52:44.030902  664154 retry.go:31] will retry after 3.756811447s: Temporary Error: unexpected response code: 503
I0908 11:52:47.791952  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[795c2193-10af-495c-9448-b683495d9d81] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:47 GMT]] Body:0xc00012b5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I0908 11:52:47.792040  664154 retry.go:31] will retry after 4.35633563s: Temporary Error: unexpected response code: 503
I0908 11:52:52.152151  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[54c33e8f-8cd7-472b-8362-2e94e3d9421f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:52:52 GMT]] Body:0xc000c83900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000547180 TLS:<nil>}
I0908 11:52:52.152260  664154 retry.go:31] will retry after 10.714752567s: Temporary Error: unexpected response code: 503
I0908 11:53:02.872694  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e59f8392-a6b6-438d-8a13-b1ecce413e6b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:53:02 GMT]] Body:0xc00012b700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005472c0 TLS:<nil>}
I0908 11:53:02.872761  664154 retry.go:31] will retry after 10.364375453s: Temporary Error: unexpected response code: 503
I0908 11:53:13.242451  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15892ca8-9f3c-4f05-8ac3-d3b14443ce71] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:53:13 GMT]] Body:0xc000954cc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000547400 TLS:<nil>}
I0908 11:53:13.242549  664154 retry.go:31] will retry after 27.806439545s: Temporary Error: unexpected response code: 503
I0908 11:53:41.053207  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7415b31f-0f59-491f-9cd9-df3eb62dcb93] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:53:41 GMT]] Body:0xc000c83980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000547540 TLS:<nil>}
I0908 11:53:41.053283  664154 retry.go:31] will retry after 32.234904072s: Temporary Error: unexpected response code: 503
I0908 11:54:13.294742  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1102dcb2-9da8-4bae-8887-760b4f1fcf3d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:54:13 GMT]] Body:0xc000c83a00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207e00 TLS:<nil>}
I0908 11:54:13.294837  664154 retry.go:31] will retry after 51.675596726s: Temporary Error: unexpected response code: 503
I0908 11:55:04.975183  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[970fdfda-1742-491a-b010-059591b8af3f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:55:04 GMT]] Body:0xc000b340c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000546000 TLS:<nil>}
I0908 11:55:04.975260  664154 retry.go:31] will retry after 53.788608016s: Temporary Error: unexpected response code: 503
I0908 11:55:58.770091  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d3020418-3731-47d0-9355-b948310a4d06] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:55:58 GMT]] Body:0xc00012a240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ac000 TLS:<nil>}
I0908 11:55:58.770206  664154 retry.go:31] will retry after 1m12.459030573s: Temporary Error: unexpected response code: 503
I0908 11:57:11.233216  664154 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[305f4868-d2fa-458e-af09-34901f8e4ad0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:57:11 GMT]] Body:0xc00012a200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ac140 TLS:<nil>}
I0908 11:57:11.233318  664154 retry.go:31] will retry after 36.509310855s: Temporary Error: unexpected response code: 503
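The proxy keeps answering 503 for the full five minutes because the kubernetes-dashboard service never gets a ready endpoint behind it; given that the addon pulls docker.io/kubernetesui/dashboard:v2.7.0 and docker.io/kubernetesui/metrics-scraper:v1.0.8 (see the "Using image" lines above), the same Docker Hub rate limit hit in the CSI test is a plausible cause. A quick diagnostic sketch against this profile, not captured in the run, would be:

	kubectl --context functional-982703 -n kubernetes-dashboard get pods -o wide
	kubectl --context functional-982703 -n kubernetes-dashboard get endpointslices
	kubectl --context functional-982703 -n kubernetes-dashboard describe deploy kubernetes-dashboard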
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-982703
helpers_test.go:243: (dbg) docker inspect functional-982703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d",
	        "Created": "2025-09-08T11:43:57.942130108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 647387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T11:43:57.979949835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/hostname",
	        "HostsPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/hosts",
	        "LogPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d-json.log",
	        "Name": "/functional-982703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-982703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-982703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d",
	                "LowerDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-982703",
	                "Source": "/var/lib/docker/volumes/functional-982703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-982703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-982703",
	                "name.minikube.sigs.k8s.io": "functional-982703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6902825b2bb740d3c0467667496661373e1d2904a6767d52684e83e116edad23",
	            "SandboxKey": "/var/run/docker/netns/6902825b2bb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-982703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:24:1a:fe:e6:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9662c4988a658ac6b4213aad8f235c9b158bb736b526a267a844b3d243ae232c",
	                    "EndpointID": "ecd42e44f56051e04408d02689c47c58881299836b72fa0b768c7a3a98b3eb81",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-982703",
	                        "620b8d39c764"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
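The inspect output above records how the functional-982703 kic container publishes its service ports on 127.0.0.1 (SSH on 33148, the API server port 8441/tcp on 33151). A minimal sketch of how the same mapping could be queried directly on the build host, assuming the container is still running and a docker CLI is available; this is illustrative only and not part of the recorded test run:

	# list every port the kic container publishes on the host (hypothetical re-run)
	docker port functional-982703

	# extract a single mapping, e.g. the Kubernetes API server port (8441/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-982703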
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-982703 -n functional-982703
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 logs -n 25: (1.494664372s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-982703 ssh -- ls -la /mount-9p                                                                         │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh            │ functional-982703 ssh sudo umount -f /mount-9p                                                                    │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ mount          │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount1 --alsologtostderr -v=1 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ ssh            │ functional-982703 ssh findmnt -T /mount1                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ mount          │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount2 --alsologtostderr -v=1 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ mount          │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount3 --alsologtostderr -v=1 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ ssh            │ functional-982703 ssh findmnt -T /mount1                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh            │ functional-982703 ssh findmnt -T /mount2                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh            │ functional-982703 ssh findmnt -T /mount3                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ mount          │ -p functional-982703 --kill=true                                                                                  │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ service        │ functional-982703 service list                                                                                    │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ service        │ functional-982703 service list -o json                                                                            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ service        │ functional-982703 service --namespace=default --https --url hello-node                                            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ service        │ functional-982703 service hello-node --url --format={{.IP}}                                                       │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ service        │ functional-982703 service hello-node --url                                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ update-context │ functional-982703 update-context --alsologtostderr -v=2                                                           │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ update-context │ functional-982703 update-context --alsologtostderr -v=2                                                           │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ update-context │ functional-982703 update-context --alsologtostderr -v=2                                                           │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image ls --format short --alsologtostderr                                                       │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image ls --format yaml --alsologtostderr                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ ssh            │ functional-982703 ssh pgrep buildkitd                                                                             │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ image          │ functional-982703 image ls --format json --alsologtostderr                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image build -t localhost/my-image:functional-982703 testdata/build --alsologtostderr            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image ls --format table --alsologtostderr                                                       │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image ls                                                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:52:35
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:52:35.101685  664079 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:52:35.102075  664079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:35.102106  664079 out.go:374] Setting ErrFile to fd 2...
	I0908 11:52:35.102114  664079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:35.102913  664079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 11:52:35.104207  664079 out.go:368] Setting JSON to false
	I0908 11:52:35.105302  664079 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9299,"bootTime":1757323056,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:52:35.105427  664079 start.go:140] virtualization: kvm guest
	I0908 11:52:35.107319  664079 out.go:179] * [functional-982703] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:52:35.108875  664079 notify.go:220] Checking for updates...
	I0908 11:52:35.108926  664079 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:52:35.110462  664079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:52:35.111752  664079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:52:35.112962  664079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:52:35.114299  664079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:52:35.115722  664079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:52:35.117669  664079 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:52:35.118354  664079 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:52:35.142804  664079 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:52:35.142929  664079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:52:35.194775  664079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 11:52:35.185335554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:52:35.194894  664079 docker.go:318] overlay module found
	I0908 11:52:35.196779  664079 out.go:179] * Using the docker driver based on existing profile
	I0908 11:52:35.198028  664079 start.go:304] selected driver: docker
	I0908 11:52:35.198047  664079 start.go:918] validating driver "docker" against &{Name:functional-982703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:52:35.198174  664079 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:52:35.200642  664079 out.go:203] 
	W0908 11:52:35.202109  664079 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 11:52:35.203607  664079 out.go:203] 
	
	
	==> CRI-O <==
	Sep 08 11:56:38 functional-982703 crio[4885]: time="2025-09-08 11:56:38.962415763Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 08 11:56:39 functional-982703 crio[4885]: time="2025-09-08 11:56:39.883235750Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=9579c881-1088-4e9b-ac8c-5222fdaf1c23 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:39 functional-982703 crio[4885]: time="2025-09-08 11:56:39.883505703Z" level=info msg="Image docker.io/nginx:alpine not found" id=9579c881-1088-4e9b-ac8c-5222fdaf1c23 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:46 functional-982703 crio[4885]: time="2025-09-08 11:56:46.883583498Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=7960a81a-36a8-45e2-baa7-5bb3c2fb59ac name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:46 functional-982703 crio[4885]: time="2025-09-08 11:56:46.883904485Z" level=info msg="Image docker.io/mysql:5.7 not found" id=7960a81a-36a8-45e2-baa7-5bb3c2fb59ac name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:50 functional-982703 crio[4885]: time="2025-09-08 11:56:50.883677683Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=aad5a2d1-92ad-497a-9c4a-7259a92b8e7c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:50 functional-982703 crio[4885]: time="2025-09-08 11:56:50.883699298Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ed9a1310-2ba7-453c-a103-808a06bb4b47 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:50 functional-982703 crio[4885]: time="2025-09-08 11:56:50.883980033Z" level=info msg="Image docker.io/nginx:alpine not found" id=aad5a2d1-92ad-497a-9c4a-7259a92b8e7c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:50 functional-982703 crio[4885]: time="2025-09-08 11:56:50.884030193Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ed9a1310-2ba7-453c-a103-808a06bb4b47 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:59 functional-982703 crio[4885]: time="2025-09-08 11:56:59.883379444Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=f6bca582-a16e-44d7-8915-76d800058bfa name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:59 functional-982703 crio[4885]: time="2025-09-08 11:56:59.883711985Z" level=info msg="Image docker.io/mysql:5.7 not found" id=f6bca582-a16e-44d7-8915-76d800058bfa name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:02 functional-982703 crio[4885]: time="2025-09-08 11:57:02.882878876Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0ed272f6-2f8f-4ad6-b93a-0bfff440cc1c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:02 functional-982703 crio[4885]: time="2025-09-08 11:57:02.883257792Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=0ed272f6-2f8f-4ad6-b93a-0bfff440cc1c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:09 functional-982703 crio[4885]: time="2025-09-08 11:57:09.044432500Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=cd6ac774-8278-47c3-b16f-73aa0f61ab41 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:57:09 functional-982703 crio[4885]: time="2025-09-08 11:57:09.049638217Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 08 11:57:10 functional-982703 crio[4885]: time="2025-09-08 11:57:10.882963074Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=9c6b55f8-02a2-49ac-99f2-0fec07a7bf69 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:10 functional-982703 crio[4885]: time="2025-09-08 11:57:10.883200023Z" level=info msg="Image docker.io/mysql:5.7 not found" id=9c6b55f8-02a2-49ac-99f2-0fec07a7bf69 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:14 functional-982703 crio[4885]: time="2025-09-08 11:57:14.884707347Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=60416622-e9d7-4f96-af4e-7a38e5eec165 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:14 functional-982703 crio[4885]: time="2025-09-08 11:57:14.884961896Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=60416622-e9d7-4f96-af4e-7a38e5eec165 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:23 functional-982703 crio[4885]: time="2025-09-08 11:57:23.882982676Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=4179a69c-8889-4f5b-af06-90cc82413f65 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:23 functional-982703 crio[4885]: time="2025-09-08 11:57:23.883311515Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=4179a69c-8889-4f5b-af06-90cc82413f65 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:25 functional-982703 crio[4885]: time="2025-09-08 11:57:25.882910618Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=545ce73b-681d-40d3-8565-8cbf2e29af20 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:25 functional-982703 crio[4885]: time="2025-09-08 11:57:25.882958588Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9f8229e4-5a54-42a4-afcf-023e042051b4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:25 functional-982703 crio[4885]: time="2025-09-08 11:57:25.883178858Z" level=info msg="Image docker.io/mysql:5.7 not found" id=545ce73b-681d-40d3-8565-8cbf2e29af20 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:57:25 functional-982703 crio[4885]: time="2025-09-08 11:57:25.883220740Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9f8229e4-5a54-42a4-afcf-023e042051b4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	26a0c977230c0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   08c62cdff7669       busybox-mount
	4dce901fc8b9a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Running             coredns                   2                   61c67e208f95d       coredns-66bc5c9577-4nfjt
	2237b45001e1c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Running             kindnet-cni               2                   541d3c4b50a45       kindnet-c84fl
	a70b43c634714       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Running             kube-proxy                2                   30a817e495d15       kube-proxy-bfdlm
	376696c34ca30       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Running             storage-provisioner       3                   b3b650a07f8d2       storage-provisioner
	6cc77c69d4b6b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      11 minutes ago      Running             kube-apiserver            0                   4f042f35cc7b1       kube-apiserver-functional-982703
	433baee4a6e87       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Running             kube-scheduler            2                   387f0b53cd073       kube-scheduler-functional-982703
	ee5315cac0c16       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      11 minutes ago      Running             kube-controller-manager   2                   ea25526253ffd       kube-controller-manager-functional-982703
	48080d396cfa6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Running             etcd                      2                   76b101dbdd1fa       etcd-functional-982703
	db496053d539c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Exited              storage-provisioner       2                   b3b650a07f8d2       storage-provisioner
	016cad6cb4986       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      12 minutes ago      Exited              kube-scheduler            1                   387f0b53cd073       kube-scheduler-functional-982703
	e3387d94e21e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      12 minutes ago      Exited              etcd                      1                   76b101dbdd1fa       etcd-functional-982703
	655bb772c77f9       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      12 minutes ago      Exited              kube-controller-manager   1                   ea25526253ffd       kube-controller-manager-functional-982703
	88e8c46a970af       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      12 minutes ago      Exited              kube-proxy                1                   30a817e495d15       kube-proxy-bfdlm
	e6d4d2a05b147       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 minutes ago      Exited              coredns                   1                   61c67e208f95d       coredns-66bc5c9577-4nfjt
	203322c9e6099       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      12 minutes ago      Exited              kindnet-cni               1                   541d3c4b50a45       kindnet-c84fl
	
	
	==> coredns [4dce901fc8b9ae7b28abed9d5dc7ee69185491287c6795a1765827e63ffe6c48] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49772 - 51858 "HINFO IN 7281298086950625820.5304803197811442486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039932s
	
	
	==> coredns [e6d4d2a05b147621b00c4b5c735c4b838f8470e96d21727b361e8b9689df5993] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43757 - 57561 "HINFO IN 2495312604912868128.9219992335971469583. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028825445s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-982703
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-982703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=functional-982703
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_44_14_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:44:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-982703
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:57:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:57:03 +0000   Mon, 08 Sep 2025 11:44:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:57:03 +0000   Mon, 08 Sep 2025 11:44:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:57:03 +0000   Mon, 08 Sep 2025 11:44:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:57:03 +0000   Mon, 08 Sep 2025 11:45:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-982703
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a515ea4462b4b4881ee42fa20f9a53c
	  System UUID:                e36415ef-2d7e-47e9-9dd1-8ace78acee2b
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hjrc4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-w982h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     mysql-5bb876957f-jptcl                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     11m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-4nfjt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-functional-982703                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-c84fl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-functional-982703              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-982703     200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-bfdlm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-982703              100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-v9krs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bq8vj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-982703 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-982703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node functional-982703 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-982703 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-982703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-982703 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           13m                node-controller  Node functional-982703 event: Registered Node functional-982703 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-982703 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-982703 event: Registered Node functional-982703 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-982703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-982703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-982703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-982703 event: Registered Node functional-982703 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.003955] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000005] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000001] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +8.187305] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000030] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000006] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000002] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[Sep 8 11:36] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +1.022122] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +2.019826] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +4.219629] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[Sep 8 11:37] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +16.130550] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +33.273137] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	
	
	==> etcd [48080d396cfa6a0dac0c10db85d31d90273dd3c46b7b1ca63062718a84cc9060] <==
	{"level":"warn","ts":"2025-09-08T11:45:59.486089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.494099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.501959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.517248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.524709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.531823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.549977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.583359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.591243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.598207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.605908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.613005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.619748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.633415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.640052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.647566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.654606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.686239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.715518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.722259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.728709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.773178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44160","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:55:58.820866Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1073}
	{"level":"info","ts":"2025-09-08T11:55:58.841526Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1073,"took":"20.174566ms","hash":385303282,"current-db-size-bytes":3702784,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1843200,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-08T11:55:58.841597Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":385303282,"revision":1073,"compact-revision":-1}
	
	
	==> etcd [e3387d94e21e532324ce978c52b5d198ad7d23fe0a9995d18899c5c5f2505e25] <==
	{"level":"warn","ts":"2025-09-08T11:45:14.821317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.883471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.890380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.977383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.979931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.987688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:15.082194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:45:39.449507Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T11:45:39.449623Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-982703","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T11:45:39.449730Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:45:39.450918Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:45:39.591785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:45:39.591852Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-08T11:45:39.591901Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T11:45:39.591902Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:45:39.591981Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:45:39.591997Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T11:45:39.591896Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:45:39.592018Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:45:39.592024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:45:39.591918Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T11:45:39.595215Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T11:45:39.595306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:45:39.595339Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T11:45:39.595351Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-982703","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:57:36 up  2:40,  0 users,  load average: 0.16, 0.31, 1.35
	Linux functional-982703 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [203322c9e6099e8aa5d308a3966f409f94266a3886fea6b8f9b14bc0a38b6779] <==
	I0908 11:45:12.384792       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 11:45:12.385228       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0908 11:45:12.385457       1 main.go:148] setting mtu 1500 for CNI 
	I0908 11:45:12.385513       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 11:45:12.385561       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T11:45:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E0908 11:45:12.798076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0908 11:45:12.798558       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 11:45:12.799396       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 11:45:12.799419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E0908 11:45:12.798959       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I0908 11:45:12.799524       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0908 11:45:12.799787       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0908 11:45:12.877911       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0908 11:45:16.280146       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0908 11:45:16.280251       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0908 11:45:16.280451       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0908 11:45:19.499723       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 11:45:19.499757       1 metrics.go:72] Registering metrics
	I0908 11:45:19.499826       1 controller.go:711] "Syncing nftables rules"
	I0908 11:45:22.799275       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:45:22.799337       1 main.go:301] handling current node
	I0908 11:45:32.798067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:45:32.798113       1 main.go:301] handling current node
	
	
	==> kindnet [2237b45001e1cf7ff66bdaa188050f3d2093dd00a95c75df5a370b8145a728a9] <==
	I0908 11:55:31.686411       1 main.go:301] handling current node
	I0908 11:55:41.688413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:55:41.688460       1 main.go:301] handling current node
	I0908 11:55:51.687516       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:55:51.687574       1 main.go:301] handling current node
	I0908 11:56:01.691804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:01.691841       1 main.go:301] handling current node
	I0908 11:56:11.690561       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:11.690598       1 main.go:301] handling current node
	I0908 11:56:21.686278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:21.686318       1 main.go:301] handling current node
	I0908 11:56:31.691758       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:31.691789       1 main.go:301] handling current node
	I0908 11:56:41.687594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:41.687630       1 main.go:301] handling current node
	I0908 11:56:51.686337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:51.686410       1 main.go:301] handling current node
	I0908 11:57:01.692201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:57:01.692237       1 main.go:301] handling current node
	I0908 11:57:11.687784       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:57:11.687824       1 main.go:301] handling current node
	I0908 11:57:21.687185       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:57:21.687238       1 main.go:301] handling current node
	I0908 11:57:31.687847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:57:31.687890       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6cc77c69d4b6b4a1fb131f9556f8c040fa5ebad2302358d862f48e91f8348ccd] <==
	I0908 11:46:24.577466       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.43.154"}
	I0908 11:46:28.210110       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.58.250"}
	I0908 11:46:31.360048       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.7.72"}
	I0908 11:47:03.048727       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:47:05.257706       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:48:18.974511       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:48:22.990666       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:49:35.781792       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:49:38.405030       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:50:55.332014       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:50:57.745882       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:51:55.445871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:52:27.064939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:52:36.108472       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 11:52:36.310226       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.26.124"}
	I0908 11:52:36.322366       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.230.11"}
	I0908 11:52:44.800776       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.101.239"}
	I0908 11:52:57.204070       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:53:45.294121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:54:01.274684       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:55:09.484234       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:55:13.542141       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:56:00.385675       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 11:56:29.356008       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:56:32.348143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [655bb772c77f9c54e947f894f2b5408e378a44a4ec5426abec99efd7315e4aba] <==
	I0908 11:45:19.602591       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 11:45:19.602630       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 11:45:19.602644       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 11:45:19.602654       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 11:45:19.603759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:45:19.604845       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 11:45:19.604976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:45:19.606970       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:45:19.609325       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 11:45:19.611687       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 11:45:19.613164       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 11:45:19.613359       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:45:19.619692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:45:19.619725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:45:19.619736       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:45:19.621797       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 11:45:19.645444       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 11:45:19.647951       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 11:45:19.647984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 11:45:19.648143       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 11:45:19.653499       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:45:19.659980       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:45:19.668279       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 11:45:19.670525       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 11:45:19.673894       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-controller-manager [ee5315cac0c163bfb95fa52279fcb70d4d6983114ee68b4a4c285321a03e006b] <==
	I0908 11:46:03.711071       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 11:46:03.711532       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 11:46:03.711075       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 11:46:03.711589       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 11:46:03.711602       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 11:46:03.712974       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 11:46:03.715437       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 11:46:03.715558       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 11:46:03.716764       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:46:03.718946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 11:46:03.719061       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:46:03.722392       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:46:03.723692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:46:03.723714       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:46:03.723720       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:46:03.725844       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 11:46:03.728563       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 11:46:03.730796       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0908 11:52:36.176241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.182456       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.184874       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.188552       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.188697       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.193895       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.197365       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [88e8c46a970af2a0a8330bed0cb2cd07f68d7ba0a0261387a4c6ba49c8ec196f] <==
	I0908 11:45:12.692788       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:45:13.178017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0908 11:45:16.277983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-982703\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0908 11:45:17.781517       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:45:17.781562       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:45:17.781669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:45:17.805175       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:45:17.805270       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:45:17.810169       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:45:17.810537       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:45:17.810568       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:45:17.811899       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:45:17.811925       1 config.go:309] "Starting node config controller"
	I0908 11:45:17.811936       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:45:17.811937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:45:17.811973       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:45:17.812008       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:45:17.812016       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:45:17.812017       1 config.go:200] "Starting service config controller"
	I0908 11:45:17.812050       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:45:17.912766       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:45:17.912969       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:45:17.913077       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a70b43c6347149dabe9d023569941ad662e5b3065328f59e8c6635e8711acb52] <==
	I0908 11:46:01.385955       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:46:01.521481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:46:01.622363       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:46:01.622406       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:46:01.622486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:46:01.648521       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:46:01.648595       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:46:01.653541       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:46:01.653926       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:46:01.653964       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:46:01.655229       1 config.go:309] "Starting node config controller"
	I0908 11:46:01.655255       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:46:01.655266       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:46:01.655321       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:46:01.655327       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:46:01.655374       1 config.go:200] "Starting service config controller"
	I0908 11:46:01.655448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:46:01.655502       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:46:01.655513       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:46:01.755629       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:46:01.755724       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:46:01.755724       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [016cad6cb498627f9e071ef394bbb7698274cffff4a692221c460fa767dc80d8] <==
	I0908 11:45:13.609784       1 serving.go:386] Generated self-signed cert in-memory
	I0908 11:45:16.678163       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:45:16.678197       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:45:16.683814       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:45:16.683944       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 11:45:16.683966       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 11:45:16.684785       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:45:16.685032       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:16.687311       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:16.685843       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:45:16.687436       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:45:16.784235       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 11:45:16.793604       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:45:16.793765       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:39.451239       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 11:45:39.451311       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0908 11:45:39.451475       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:39.451427       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 11:45:39.451918       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 11:45:39.451958       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 11:45:39.452051       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [433baee4a6e87be57add9f7878b2bcc28c6a48f210f5f0aa04a7b4cc377162fb] <==
	I0908 11:45:58.528920       1 serving.go:386] Generated self-signed cert in-memory
	W0908 11:46:00.295224       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 11:46:00.295352       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 11:46:00.295395       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 11:46:00.295432       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 11:46:00.480791       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:46:00.480826       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:46:00.483963       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:46:00.484062       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:46:00.484165       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:46:00.484166       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:46:00.584429       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 11:56:58 functional-982703 kubelet[5250]: E0908 11:56:58.883487    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="099fa2a8-93cb-41e5-bc04-b7283dd0c405"
	Sep 08 11:56:59 functional-982703 kubelet[5250]: E0908 11:56:59.884118    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-jptcl" podUID="05dc921d-b1c5-4289-8b99-cf2bf22ea0a7"
	Sep 08 11:57:02 functional-982703 kubelet[5250]: E0908 11:57:02.883605    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bq8vj" podUID="558863ab-0ff0-4db1-9b14-23fbc3616293"
	Sep 08 11:57:05 functional-982703 kubelet[5250]: E0908 11:57:05.882468    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w982h" podUID="184c31ec-fa34-4258-ab22-9369d4cfbbd0"
	Sep 08 11:57:07 functional-982703 kubelet[5250]: E0908 11:57:07.196430    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332627196168594  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 11:57:07 functional-982703 kubelet[5250]: E0908 11:57:07.196475    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332627196168594  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 11:57:07 functional-982703 kubelet[5250]: E0908 11:57:07.882213    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-hjrc4" podUID="af9a3701-91c8-45e0-a4ca-5f35d3e9c05a"
	Sep 08 11:57:09 functional-982703 kubelet[5250]: E0908 11:57:09.043912    5250 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:57:09 functional-982703 kubelet[5250]: E0908 11:57:09.043996    5250 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:57:09 functional-982703 kubelet[5250]: E0908 11:57:09.044217    5250 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-v9krs_kubernetes-dashboard(f7289fc4-893f-444e-a079-1924fc0e5d1d): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 11:57:09 functional-982703 kubelet[5250]: E0908 11:57:09.044291    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v9krs" podUID="f7289fc4-893f-444e-a079-1924fc0e5d1d"
	Sep 08 11:57:09 functional-982703 kubelet[5250]: E0908 11:57:09.882592    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="099fa2a8-93cb-41e5-bc04-b7283dd0c405"
	Sep 08 11:57:10 functional-982703 kubelet[5250]: E0908 11:57:10.883460    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-jptcl" podUID="05dc921d-b1c5-4289-8b99-cf2bf22ea0a7"
	Sep 08 11:57:14 functional-982703 kubelet[5250]: E0908 11:57:14.885270    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bq8vj" podUID="558863ab-0ff0-4db1-9b14-23fbc3616293"
	Sep 08 11:57:17 functional-982703 kubelet[5250]: E0908 11:57:17.198038    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332637197792377  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 11:57:17 functional-982703 kubelet[5250]: E0908 11:57:17.198082    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332637197792377  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 11:57:19 functional-982703 kubelet[5250]: E0908 11:57:19.883340    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-hjrc4" podUID="af9a3701-91c8-45e0-a4ca-5f35d3e9c05a"
	Sep 08 11:57:21 functional-982703 kubelet[5250]: E0908 11:57:21.882849    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="099fa2a8-93cb-41e5-bc04-b7283dd0c405"
	Sep 08 11:57:23 functional-982703 kubelet[5250]: E0908 11:57:23.883762    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v9krs" podUID="f7289fc4-893f-444e-a079-1924fc0e5d1d"
	Sep 08 11:57:25 functional-982703 kubelet[5250]: E0908 11:57:25.883533    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-jptcl" podUID="05dc921d-b1c5-4289-8b99-cf2bf22ea0a7"
	Sep 08 11:57:27 functional-982703 kubelet[5250]: E0908 11:57:27.199756    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332647199441928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 11:57:27 functional-982703 kubelet[5250]: E0908 11:57:27.199798    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332647199441928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 11:57:30 functional-982703 kubelet[5250]: E0908 11:57:30.883444    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-hjrc4" podUID="af9a3701-91c8-45e0-a4ca-5f35d3e9c05a"
	Sep 08 11:57:34 functional-982703 kubelet[5250]: E0908 11:57:34.882916    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="099fa2a8-93cb-41e5-bc04-b7283dd0c405"
	Sep 08 11:57:36 functional-982703 kubelet[5250]: E0908 11:57:36.884762    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v9krs" podUID="f7289fc4-893f-444e-a079-1924fc0e5d1d"
	
	
	==> storage-provisioner [376696c34ca30bbc2c9ba32382c554c4721ccf0e62f98b592421f2e54f245671] <==
	W0908 11:57:11.544667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:13.548697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:13.554472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:15.557645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:15.561994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:17.565631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:17.570283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:19.573760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:19.579957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:21.583712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:21.588136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:23.591960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:23.597009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:25.600356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:25.604644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:27.608264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:27.613843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:29.617092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:29.621621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:31.625105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:31.629862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:33.633739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:33.638806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:35.642843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:57:35.647853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [db496053d539cb8b18db74e98dfad7b03bba337a9d25ae0ca7d6961f5f4adf7f] <==
	I0908 11:45:24.890428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 11:45:24.898342       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 11:45:24.898395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 11:45:24.900904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:28.356529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:32.617207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:36.216423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:39.270880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
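The kubelet section of the dump above repeats two distinct pull failures: Docker Hub's unauthenticated rate limit (toomanyrequests) and the short name "kicbase/echo-server:latest" failing to resolve because no unqualified-search registries are defined in /etc/containers/registries.conf. A minimal sketch of how that short-name policy could be inspected on the node, assuming shell access to the functional-982703 profile; the registry list shown in the comment is an illustrative assumption, not the contents of this node's file:

    # Inspect the short-name policy inside the minikube node (CRI-O reads
    # /etc/containers/registries.conf for unqualified image names).
    minikube -p functional-982703 ssh -- cat /etc/containers/registries.conf

    # Hypothetical registries.conf fragment that would let unqualified names
    # such as "kicbase/echo-server" be searched on Docker Hub:
    #   unqualified-search-registries = ["docker.io"]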
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-982703 -n functional-982703
helpers_test.go:269: (dbg) Run:  kubectl --context functional-982703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj: exit status 1 (121.75967ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:51:53 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://26a0c977230c065341b7f2a3d1081ddb31606fba545a47b7a30641c7f3d8fc73
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 11:52:38 +0000
	      Finished:     Mon, 08 Sep 2025 11:52:38 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4pg5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-4pg5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m44s  default-scheduler  Successfully assigned default/busybox-mount to functional-982703
	  Normal  Pulling    5m44s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m59s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.354s (44.4s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m59s  kubelet            Created container: mount-munger
	  Normal  Started    4m59s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hjrc4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rhf2d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rhf2d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  11m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hjrc4 to functional-982703
	  Normal   Pulling    5m11s (x5 over 11m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     4m59s (x5 over 11m)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m39s (x16 over 11m)  kubelet            Error: ImagePullBackOff
	  Warning  Failed     59s (x6 over 11m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Normal   BackOff    45s (x23 over 11m)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-w982h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:52:44 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2mm5t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2mm5t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m53s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w982h to functional-982703
	  Warning  Failed     59s (x3 over 3m59s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     59s (x3 over 3m59s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    32s (x4 over 3m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     32s (x4 over 3m58s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    18s (x4 over 4m52s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-jptcl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4nfdt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4nfdt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-jptcl to functional-982703
	  Warning  Failed     5m32s                kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m9s (x5 over 11m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m29s (x4 over 10m)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m29s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     76s (x16 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    12s (x21 over 10m)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:28 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9mcrq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9mcrq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  11m                   default-scheduler  Successfully assigned default/nginx-svc to functional-982703
	  Normal   Pulling    4m39s (x5 over 11m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m29s (x5 over 10m)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m29s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m14s (x16 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    58s (x22 over 10m)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vksvd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vksvd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  11m                   default-scheduler  Successfully assigned default/sp-pod to functional-982703
	  Normal   Pulling    3m37s (x5 over 11m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     89s (x5 over 9m39s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     89s (x5 over 9m39s)   kubelet            Error: ErrImagePull
	  Warning  Failed     16s (x16 over 9m38s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x17 over 9m38s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-v9krs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bq8vj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj: exit status 1
E0908 12:00:56.269030  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:02:19.335292  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.47s)
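
The sp-pod events above point at the shared root cause for this group of failures: unauthenticated pulls of docker.io images (nginx, nginx:alpine, mysql:5.7, the dashboard images) are rejected by Docker Hub with toomanyrequests. As a hedged sketch only (not something this suite does; the secret name and credentials are placeholders), authenticated pulls get a higher rate limit, so attaching a registry secret to the default service account in this profile would let those pulls through:

# Sketch, not part of the test run: create Docker Hub credentials and wire them
# into the default service account so new pods pull as an authenticated user.
kubectl --context functional-982703 create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> --docker-password=<access-token>
kubectl --context functional-982703 patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'

Existing pods would still need to be recreated (or the images pre-loaded with minikube image load) before the ImagePullBackOff clears.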

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-982703 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-982703 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-w982h" [184c31ec-fa34-4258-ab22-9369d4cfbbd0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0908 11:55:56.269586  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-982703 -n functional-982703
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-08 12:02:45.129592176 +0000 UTC m=+1784.422334646
functional_test.go:1645: (dbg) Run:  kubectl --context functional-982703 describe po hello-node-connect-7d85dfc575-w982h -n default
functional_test.go:1645: (dbg) kubectl --context functional-982703 describe po hello-node-connect-7d85dfc575-w982h -n default:
Name:             hello-node-connect-7d85dfc575-w982h
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-982703/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:52:44 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2mm5t (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-2mm5t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w982h to functional-982703
  Normal   Pulling    3m39s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     3m6s (x5 over 9m7s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     3m6s (x5 over 9m7s)   kubelet            Error: ErrImagePull
  Warning  Failed     110s (x16 over 9m6s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    43s (x21 over 9m6s)   kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-982703 logs hello-node-connect-7d85dfc575-w982h -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-982703 logs hello-node-connect-7d85dfc575-w982h -n default: exit status 1 (73.149582ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w982h" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-982703 logs hello-node-connect-7d85dfc575-w982h -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-982703 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-w982h
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-982703/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:52:44 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2mm5t (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-2mm5t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w982h to functional-982703
  Normal   Pulling    3m39s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     3m6s (x5 over 9m7s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     3m6s (x5 over 9m7s)   kubelet            Error: ErrImagePull
  Warning  Failed     110s (x16 over 9m6s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    43s (x21 over 9m6s)   kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-982703 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-982703 logs -l app=hello-node-connect: exit status 1 (64.045831ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w982h" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-982703 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-982703 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.101.239
IPs:                      10.111.101.239
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32432/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
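
The service has no Endpoints because the echo-server container never started: CRI-O rejects the unqualified name "kicbase/echo-server" when no unqualified-search registries are configured in /etc/containers/registries.conf. As a hedged sketch only (the :latest tag is an assumption taken from the kubelet error above, not from the test code), fully qualifying the image sidesteps the short-name lookup:

# Sketch, not part of the test run: point the existing deployment at a
# fully-qualified image so CRI-O does not need short-name resolution.
kubectl --context functional-982703 set image deployment/hello-node-connect \
  echo-server=docker.io/kicbase/echo-server:latest

Alternatively, adding unqualified-search-registries = ["docker.io"] to the node's /etc/containers/registries.conf would let short names resolve against Docker Hub.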
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-982703
helpers_test.go:243: (dbg) docker inspect functional-982703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d",
	        "Created": "2025-09-08T11:43:57.942130108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 647387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T11:43:57.979949835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/hostname",
	        "HostsPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/hosts",
	        "LogPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d-json.log",
	        "Name": "/functional-982703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-982703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-982703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d",
	                "LowerDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-982703",
	                "Source": "/var/lib/docker/volumes/functional-982703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-982703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-982703",
	                "name.minikube.sigs.k8s.io": "functional-982703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6902825b2bb740d3c0467667496661373e1d2904a6767d52684e83e116edad23",
	            "SandboxKey": "/var/run/docker/netns/6902825b2bb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-982703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:24:1a:fe:e6:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9662c4988a658ac6b4213aad8f235c9b158bb736b526a267a844b3d243ae232c",
	                    "EndpointID": "ecd42e44f56051e04408d02689c47c58881299836b72fa0b768c7a3a98b3eb81",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-982703",
	                        "620b8d39c764"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-982703 -n functional-982703
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 logs -n 25: (1.439233189s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-982703 ssh -- ls -la /mount-9p                                                                         │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh            │ functional-982703 ssh sudo umount -f /mount-9p                                                                    │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ mount          │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount1 --alsologtostderr -v=1 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ ssh            │ functional-982703 ssh findmnt -T /mount1                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ mount          │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount2 --alsologtostderr -v=1 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ mount          │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount3 --alsologtostderr -v=1 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ ssh            │ functional-982703 ssh findmnt -T /mount1                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh            │ functional-982703 ssh findmnt -T /mount2                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh            │ functional-982703 ssh findmnt -T /mount3                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ mount          │ -p functional-982703 --kill=true                                                                                  │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ service        │ functional-982703 service list                                                                                    │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ service        │ functional-982703 service list -o json                                                                            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ service        │ functional-982703 service --namespace=default --https --url hello-node                                            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ service        │ functional-982703 service hello-node --url --format={{.IP}}                                                       │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ service        │ functional-982703 service hello-node --url                                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ update-context │ functional-982703 update-context --alsologtostderr -v=2                                                           │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ update-context │ functional-982703 update-context --alsologtostderr -v=2                                                           │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ update-context │ functional-982703 update-context --alsologtostderr -v=2                                                           │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image ls --format short --alsologtostderr                                                       │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image ls --format yaml --alsologtostderr                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ ssh            │ functional-982703 ssh pgrep buildkitd                                                                             │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ image          │ functional-982703 image ls --format json --alsologtostderr                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image build -t localhost/my-image:functional-982703 testdata/build --alsologtostderr            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image ls --format table --alsologtostderr                                                       │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ image          │ functional-982703 image ls                                                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:52:35
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:52:35.101685  664079 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:52:35.102075  664079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:35.102106  664079 out.go:374] Setting ErrFile to fd 2...
	I0908 11:52:35.102114  664079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:35.102913  664079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 11:52:35.104207  664079 out.go:368] Setting JSON to false
	I0908 11:52:35.105302  664079 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9299,"bootTime":1757323056,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:52:35.105427  664079 start.go:140] virtualization: kvm guest
	I0908 11:52:35.107319  664079 out.go:179] * [functional-982703] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:52:35.108875  664079 notify.go:220] Checking for updates...
	I0908 11:52:35.108926  664079 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:52:35.110462  664079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:52:35.111752  664079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:52:35.112962  664079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:52:35.114299  664079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:52:35.115722  664079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:52:35.117669  664079 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:52:35.118354  664079 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:52:35.142804  664079 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:52:35.142929  664079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:52:35.194775  664079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 11:52:35.185335554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:52:35.194894  664079 docker.go:318] overlay module found
	I0908 11:52:35.196779  664079 out.go:179] * Using the docker driver based on the existing profile
	I0908 11:52:35.198028  664079 start.go:304] selected driver: docker
	I0908 11:52:35.198047  664079 start.go:918] validating driver "docker" against &{Name:functional-982703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:52:35.198174  664079 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:52:35.200642  664079 out.go:203] 
	W0908 11:52:35.202109  664079 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 11:52:35.203607  664079 out.go:203] 
	
	
	==> CRI-O <==
	Sep 08 12:02:09 functional-982703 crio[4885]: time="2025-09-08 12:02:09.883641582Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=11aa2b5b-882f-4c65-b8f1-5cd6a157da5b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:09 functional-982703 crio[4885]: time="2025-09-08 12:02:09.883936172Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=11aa2b5b-882f-4c65-b8f1-5cd6a157da5b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:11 functional-982703 crio[4885]: time="2025-09-08 12:02:11.882964078Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a1e2ec0c-b1cd-49a9-bb34-5f98ed5ac82c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:11 functional-982703 crio[4885]: time="2025-09-08 12:02:11.882960115Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=f1ef3940-7442-4b16-a62d-90405e46e9f2 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:11 functional-982703 crio[4885]: time="2025-09-08 12:02:11.883197332Z" level=info msg="Image docker.io/nginx:alpine not found" id=a1e2ec0c-b1cd-49a9-bb34-5f98ed5ac82c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:11 functional-982703 crio[4885]: time="2025-09-08 12:02:11.883249288Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=f1ef3940-7442-4b16-a62d-90405e46e9f2 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:12 functional-982703 crio[4885]: time="2025-09-08 12:02:12.882555285Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=d1ee7666-844c-4e2a-bfd2-f81bd4abdfa8 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:12 functional-982703 crio[4885]: time="2025-09-08 12:02:12.882838907Z" level=info msg="Image docker.io/mysql:5.7 not found" id=d1ee7666-844c-4e2a-bfd2-f81bd4abdfa8 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:23 functional-982703 crio[4885]: time="2025-09-08 12:02:23.883509111Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=21abd448-9fe3-48aa-92b4-b92639a3a344 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:23 functional-982703 crio[4885]: time="2025-09-08 12:02:23.883762928Z" level=info msg="Image docker.io/mysql:5.7 not found" id=21abd448-9fe3-48aa-92b4-b92639a3a344 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:24 functional-982703 crio[4885]: time="2025-09-08 12:02:24.882808794Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=62dc5619-23d5-4a26-8426-46a7c1431d76 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:24 functional-982703 crio[4885]: time="2025-09-08 12:02:24.882809200Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f95af788-cb9a-4c8c-add7-2af5da6810e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:24 functional-982703 crio[4885]: time="2025-09-08 12:02:24.883107950Z" level=info msg="Image docker.io/nginx:alpine not found" id=62dc5619-23d5-4a26-8426-46a7c1431d76 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:24 functional-982703 crio[4885]: time="2025-09-08 12:02:24.883168547Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f95af788-cb9a-4c8c-add7-2af5da6810e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:25 functional-982703 crio[4885]: time="2025-09-08 12:02:25.883337126Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d9b23260-a6c7-481c-b8b6-dbde2d2178be name=/runtime.v1.ImageService/PullImage
	Sep 08 12:02:26 functional-982703 crio[4885]: time="2025-09-08 12:02:26.883882716Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=8fe48e0b-6e9b-4380-ad24-1b51099a3232 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:26 functional-982703 crio[4885]: time="2025-09-08 12:02:26.884192421Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=8fe48e0b-6e9b-4380-ad24-1b51099a3232 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:34 functional-982703 crio[4885]: time="2025-09-08 12:02:34.883560806Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=575ccd36-3a2d-4682-8e1b-d92dbe8ecaf5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:34 functional-982703 crio[4885]: time="2025-09-08 12:02:34.883912736Z" level=info msg="Image docker.io/mysql:5.7 not found" id=575ccd36-3a2d-4682-8e1b-d92dbe8ecaf5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:35 functional-982703 crio[4885]: time="2025-09-08 12:02:35.882879515Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=020e3692-9f74-42a8-a67c-abb00ca0130c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:35 functional-982703 crio[4885]: time="2025-09-08 12:02:35.883167024Z" level=info msg="Image docker.io/nginx:alpine not found" id=020e3692-9f74-42a8-a67c-abb00ca0130c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:36 functional-982703 crio[4885]: time="2025-09-08 12:02:36.884222004Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4ddf7329-ca82-499a-8194-169301937799 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:36 functional-982703 crio[4885]: time="2025-09-08 12:02:36.884552377Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4ddf7329-ca82-499a-8194-169301937799 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:40 functional-982703 crio[4885]: time="2025-09-08 12:02:40.883624674Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=e15b40c7-2ed2-445d-a422-d27854153b44 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:02:40 functional-982703 crio[4885]: time="2025-09-08 12:02:40.884010332Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=e15b40c7-2ed2-445d-a422-d27854153b44 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	26a0c977230c0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   08c62cdff7669       busybox-mount
	4dce901fc8b9a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      16 minutes ago      Running             coredns                   2                   61c67e208f95d       coredns-66bc5c9577-4nfjt
	2237b45001e1c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      16 minutes ago      Running             kindnet-cni               2                   541d3c4b50a45       kindnet-c84fl
	a70b43c634714       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      16 minutes ago      Running             kube-proxy                2                   30a817e495d15       kube-proxy-bfdlm
	376696c34ca30       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Running             storage-provisioner       3                   b3b650a07f8d2       storage-provisioner
	6cc77c69d4b6b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      16 minutes ago      Running             kube-apiserver            0                   4f042f35cc7b1       kube-apiserver-functional-982703
	433baee4a6e87       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      16 minutes ago      Running             kube-scheduler            2                   387f0b53cd073       kube-scheduler-functional-982703
	ee5315cac0c16       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      16 minutes ago      Running             kube-controller-manager   2                   ea25526253ffd       kube-controller-manager-functional-982703
	48080d396cfa6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      16 minutes ago      Running             etcd                      2                   76b101dbdd1fa       etcd-functional-982703
	db496053d539c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       2                   b3b650a07f8d2       storage-provisioner
	016cad6cb4986       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      17 minutes ago      Exited              kube-scheduler            1                   387f0b53cd073       kube-scheduler-functional-982703
	e3387d94e21e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      17 minutes ago      Exited              etcd                      1                   76b101dbdd1fa       etcd-functional-982703
	655bb772c77f9       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      17 minutes ago      Exited              kube-controller-manager   1                   ea25526253ffd       kube-controller-manager-functional-982703
	88e8c46a970af       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      17 minutes ago      Exited              kube-proxy                1                   30a817e495d15       kube-proxy-bfdlm
	e6d4d2a05b147       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      17 minutes ago      Exited              coredns                   1                   61c67e208f95d       coredns-66bc5c9577-4nfjt
	203322c9e6099       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      17 minutes ago      Exited              kindnet-cni               1                   541d3c4b50a45       kindnet-c84fl
	
	
	==> coredns [4dce901fc8b9ae7b28abed9d5dc7ee69185491287c6795a1765827e63ffe6c48] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49772 - 51858 "HINFO IN 7281298086950625820.5304803197811442486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039932s
	
	
	==> coredns [e6d4d2a05b147621b00c4b5c735c4b838f8470e96d21727b361e8b9689df5993] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43757 - 57561 "HINFO IN 2495312604912868128.9219992335971469583. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028825445s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-982703
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-982703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=functional-982703
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_44_14_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:44:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-982703
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:02:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:59:57 +0000   Mon, 08 Sep 2025 11:44:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:59:57 +0000   Mon, 08 Sep 2025 11:44:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:59:57 +0000   Mon, 08 Sep 2025 11:44:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:59:57 +0000   Mon, 08 Sep 2025 11:45:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-982703
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a515ea4462b4b4881ee42fa20f9a53c
	  System UUID:                e36415ef-2d7e-47e9-9dd1-8ace78acee2b
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hjrc4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     hello-node-connect-7d85dfc575-w982h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-jptcl                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     16m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-4nfjt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 etcd-functional-982703                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-c84fl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-functional-982703              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-982703     200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-bfdlm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-functional-982703              100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-v9krs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bq8vj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node functional-982703 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node functional-982703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node functional-982703 status is now: NodeHasSufficientPID
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     18m                kubelet          Node functional-982703 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node functional-982703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node functional-982703 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           18m                node-controller  Node functional-982703 event: Registered Node functional-982703 in Controller
	  Normal   NodeReady                17m                kubelet          Node functional-982703 status is now: NodeReady
	  Normal   RegisteredNode           17m                node-controller  Node functional-982703 event: Registered Node functional-982703 in Controller
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-982703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-982703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)  kubelet          Node functional-982703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node functional-982703 event: Registered Node functional-982703 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.003955] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000005] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000001] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +8.187305] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000030] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000006] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000002] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[Sep 8 11:36] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +1.022122] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +2.019826] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +4.219629] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[Sep 8 11:37] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +16.130550] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +33.273137] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	
	
	==> etcd [48080d396cfa6a0dac0c10db85d31d90273dd3c46b7b1ca63062718a84cc9060] <==
	{"level":"warn","ts":"2025-09-08T11:45:59.517248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.524709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.531823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.549977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.583359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.591243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.598207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.605908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.613005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.619748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.633415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.640052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.647566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.654606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.686239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.715518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.722259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.728709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.773178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44160","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:55:58.820866Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1073}
	{"level":"info","ts":"2025-09-08T11:55:58.841526Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1073,"took":"20.174566ms","hash":385303282,"current-db-size-bytes":3702784,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1843200,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-08T11:55:58.841597Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":385303282,"revision":1073,"compact-revision":-1}
	{"level":"info","ts":"2025-09-08T12:00:58.826366Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1576}
	{"level":"info","ts":"2025-09-08T12:00:58.830295Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1576,"took":"3.442953ms","hash":3668449796,"current-db-size-bytes":3702784,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2715648,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2025-09-08T12:00:58.830350Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3668449796,"revision":1576,"compact-revision":1073}
	
	
	==> etcd [e3387d94e21e532324ce978c52b5d198ad7d23fe0a9995d18899c5c5f2505e25] <==
	{"level":"warn","ts":"2025-09-08T11:45:14.821317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.883471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.890380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.977383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.979931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.987688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:15.082194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:45:39.449507Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T11:45:39.449623Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-982703","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T11:45:39.449730Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:45:39.450918Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:45:39.591785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:45:39.591852Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-08T11:45:39.591901Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T11:45:39.591902Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:45:39.591981Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:45:39.591997Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T11:45:39.591896Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:45:39.592018Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:45:39.592024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:45:39.591918Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T11:45:39.595215Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T11:45:39.595306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:45:39.595339Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T11:45:39.595351Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-982703","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 12:02:46 up  2:45,  0 users,  load average: 0.03, 0.16, 0.98
	Linux functional-982703 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [203322c9e6099e8aa5d308a3966f409f94266a3886fea6b8f9b14bc0a38b6779] <==
	I0908 11:45:12.384792       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 11:45:12.385228       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0908 11:45:12.385457       1 main.go:148] setting mtu 1500 for CNI 
	I0908 11:45:12.385513       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 11:45:12.385561       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T11:45:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E0908 11:45:12.798076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0908 11:45:12.798558       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 11:45:12.799396       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 11:45:12.799419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E0908 11:45:12.798959       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I0908 11:45:12.799524       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0908 11:45:12.799787       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0908 11:45:12.877911       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0908 11:45:16.280146       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0908 11:45:16.280251       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0908 11:45:16.280451       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0908 11:45:19.499723       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 11:45:19.499757       1 metrics.go:72] Registering metrics
	I0908 11:45:19.499826       1 controller.go:711] "Syncing nftables rules"
	I0908 11:45:22.799275       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:45:22.799337       1 main.go:301] handling current node
	I0908 11:45:32.798067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:45:32.798113       1 main.go:301] handling current node
	
	
	==> kindnet [2237b45001e1cf7ff66bdaa188050f3d2093dd00a95c75df5a370b8145a728a9] <==
	I0908 12:00:41.687797       1 main.go:301] handling current node
	I0908 12:00:51.691750       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:00:51.691784       1 main.go:301] handling current node
	I0908 12:01:01.691741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:01:01.691780       1 main.go:301] handling current node
	I0908 12:01:11.687796       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:01:11.687835       1 main.go:301] handling current node
	I0908 12:01:21.690796       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:01:21.690840       1 main.go:301] handling current node
	I0908 12:01:31.691589       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:01:31.691674       1 main.go:301] handling current node
	I0908 12:01:41.688640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:01:41.688690       1 main.go:301] handling current node
	I0908 12:01:51.687838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:01:51.687890       1 main.go:301] handling current node
	I0908 12:02:01.691767       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:02:01.691811       1 main.go:301] handling current node
	I0908 12:02:11.690877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:02:11.690929       1 main.go:301] handling current node
	I0908 12:02:21.689585       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:02:21.689637       1 main.go:301] handling current node
	I0908 12:02:31.693088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:02:31.693134       1 main.go:301] handling current node
	I0908 12:02:41.689575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:02:41.689620       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6cc77c69d4b6b4a1fb131f9556f8c040fa5ebad2302358d862f48e91f8348ccd] <==
	I0908 11:50:57.745882       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:51:55.445871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:52:27.064939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:52:36.108472       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 11:52:36.310226       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.26.124"}
	I0908 11:52:36.322366       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.230.11"}
	I0908 11:52:44.800776       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.101.239"}
	I0908 11:52:57.204070       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:53:45.294121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:54:01.274684       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:55:09.484234       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:55:13.542141       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:56:00.385675       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 11:56:29.356008       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:56:32.348143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:57:37.848362       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:57:49.638359       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:58:55.558117       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:59:14.468398       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:59:58.908924       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:00:14.743953       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:01:17.437636       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:01:37.653167       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:02:35.538006       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:02:39.574711       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [655bb772c77f9c54e947f894f2b5408e378a44a4ec5426abec99efd7315e4aba] <==
	I0908 11:45:19.602591       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 11:45:19.602630       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 11:45:19.602644       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 11:45:19.602654       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 11:45:19.603759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:45:19.604845       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 11:45:19.604976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:45:19.606970       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:45:19.609325       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 11:45:19.611687       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 11:45:19.613164       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 11:45:19.613359       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:45:19.619692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:45:19.619725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:45:19.619736       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:45:19.621797       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 11:45:19.645444       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 11:45:19.647951       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 11:45:19.647984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 11:45:19.648143       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 11:45:19.653499       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:45:19.659980       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:45:19.668279       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 11:45:19.670525       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 11:45:19.673894       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-controller-manager [ee5315cac0c163bfb95fa52279fcb70d4d6983114ee68b4a4c285321a03e006b] <==
	I0908 11:46:03.711071       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 11:46:03.711532       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 11:46:03.711075       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 11:46:03.711589       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 11:46:03.711602       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 11:46:03.712974       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 11:46:03.715437       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 11:46:03.715558       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 11:46:03.716764       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:46:03.718946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 11:46:03.719061       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:46:03.722392       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:46:03.723692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:46:03.723714       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:46:03.723720       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:46:03.725844       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 11:46:03.728563       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 11:46:03.730796       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0908 11:52:36.176241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.182456       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.184874       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.188552       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.188697       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.193895       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.197365       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [88e8c46a970af2a0a8330bed0cb2cd07f68d7ba0a0261387a4c6ba49c8ec196f] <==
	I0908 11:45:12.692788       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:45:13.178017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0908 11:45:16.277983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-982703\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0908 11:45:17.781517       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:45:17.781562       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:45:17.781669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:45:17.805175       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:45:17.805270       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:45:17.810169       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:45:17.810537       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:45:17.810568       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:45:17.811899       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:45:17.811925       1 config.go:309] "Starting node config controller"
	I0908 11:45:17.811936       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:45:17.811937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:45:17.811973       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:45:17.812008       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:45:17.812016       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:45:17.812017       1 config.go:200] "Starting service config controller"
	I0908 11:45:17.812050       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:45:17.912766       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:45:17.912969       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:45:17.913077       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a70b43c6347149dabe9d023569941ad662e5b3065328f59e8c6635e8711acb52] <==
	I0908 11:46:01.385955       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:46:01.521481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:46:01.622363       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:46:01.622406       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:46:01.622486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:46:01.648521       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:46:01.648595       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:46:01.653541       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:46:01.653926       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:46:01.653964       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:46:01.655229       1 config.go:309] "Starting node config controller"
	I0908 11:46:01.655255       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:46:01.655266       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:46:01.655321       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:46:01.655327       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:46:01.655374       1 config.go:200] "Starting service config controller"
	I0908 11:46:01.655448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:46:01.655502       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:46:01.655513       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:46:01.755629       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:46:01.755724       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:46:01.755724       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [016cad6cb498627f9e071ef394bbb7698274cffff4a692221c460fa767dc80d8] <==
	I0908 11:45:13.609784       1 serving.go:386] Generated self-signed cert in-memory
	I0908 11:45:16.678163       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:45:16.678197       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:45:16.683814       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:45:16.683944       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 11:45:16.683966       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 11:45:16.684785       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:45:16.685032       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:16.687311       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:16.685843       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:45:16.687436       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:45:16.784235       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 11:45:16.793604       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:45:16.793765       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:39.451239       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 11:45:39.451311       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0908 11:45:39.451475       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:39.451427       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 11:45:39.451918       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 11:45:39.451958       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 11:45:39.452051       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [433baee4a6e87be57add9f7878b2bcc28c6a48f210f5f0aa04a7b4cc377162fb] <==
	I0908 11:45:58.528920       1 serving.go:386] Generated self-signed cert in-memory
	W0908 11:46:00.295224       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 11:46:00.295352       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 11:46:00.295395       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 11:46:00.295432       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 11:46:00.480791       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:46:00.480826       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:46:00.483963       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:46:00.484062       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:46:00.484165       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:46:00.484166       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:46:00.584429       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:02:14 functional-982703 kubelet[5250]: E0908 12:02:14.882536    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w982h" podUID="184c31ec-fa34-4258-ab22-9369d4cfbbd0"
	Sep 08 12:02:17 functional-982703 kubelet[5250]: E0908 12:02:17.245688    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332937245453876  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 12:02:17 functional-982703 kubelet[5250]: E0908 12:02:17.245743    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332937245453876  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 12:02:17 functional-982703 kubelet[5250]: E0908 12:02:17.882808    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-hjrc4" podUID="af9a3701-91c8-45e0-a4ca-5f35d3e9c05a"
	Sep 08 12:02:23 functional-982703 kubelet[5250]: E0908 12:02:23.884121    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-jptcl" podUID="05dc921d-b1c5-4289-8b99-cf2bf22ea0a7"
	Sep 08 12:02:24 functional-982703 kubelet[5250]: E0908 12:02:24.883462    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5cb91085-c70e-4060-b183-9a58c2b44c2c"
	Sep 08 12:02:24 functional-982703 kubelet[5250]: E0908 12:02:24.883553    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bq8vj" podUID="558863ab-0ff0-4db1-9b14-23fbc3616293"
	Sep 08 12:02:25 functional-982703 kubelet[5250]: E0908 12:02:25.883758    5250 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 08 12:02:25 functional-982703 kubelet[5250]: E0908 12:02:25.883812    5250 kuberuntime_image.go:43] "Failed to pull image" err="short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 08 12:02:25 functional-982703 kubelet[5250]: E0908 12:02:25.883893    5250 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-w982h_default(184c31ec-fa34-4258-ab22-9369d4cfbbd0): ErrImagePull: short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" logger="UnhandledError"
	Sep 08 12:02:25 functional-982703 kubelet[5250]: E0908 12:02:25.883926    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w982h" podUID="184c31ec-fa34-4258-ab22-9369d4cfbbd0"
	Sep 08 12:02:26 functional-982703 kubelet[5250]: E0908 12:02:26.883680    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="099fa2a8-93cb-41e5-bc04-b7283dd0c405"
	Sep 08 12:02:26 functional-982703 kubelet[5250]: E0908 12:02:26.884458    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v9krs" podUID="f7289fc4-893f-444e-a079-1924fc0e5d1d"
	Sep 08 12:02:27 functional-982703 kubelet[5250]: E0908 12:02:27.247022    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332947246782858  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 12:02:27 functional-982703 kubelet[5250]: E0908 12:02:27.247063    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332947246782858  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 12:02:29 functional-982703 kubelet[5250]: E0908 12:02:29.882611    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-hjrc4" podUID="af9a3701-91c8-45e0-a4ca-5f35d3e9c05a"
	Sep 08 12:02:34 functional-982703 kubelet[5250]: E0908 12:02:34.884226    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-jptcl" podUID="05dc921d-b1c5-4289-8b99-cf2bf22ea0a7"
	Sep 08 12:02:35 functional-982703 kubelet[5250]: E0908 12:02:35.883611    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5cb91085-c70e-4060-b183-9a58c2b44c2c"
	Sep 08 12:02:36 functional-982703 kubelet[5250]: E0908 12:02:36.883861    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-w982h" podUID="184c31ec-fa34-4258-ab22-9369d4cfbbd0"
	Sep 08 12:02:36 functional-982703 kubelet[5250]: E0908 12:02:36.884865    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bq8vj" podUID="558863ab-0ff0-4db1-9b14-23fbc3616293"
	Sep 08 12:02:37 functional-982703 kubelet[5250]: E0908 12:02:37.248475    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332957248238099  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 12:02:37 functional-982703 kubelet[5250]: E0908 12:02:37.248514    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332957248238099  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200833}  inodes_used:{value:104}}"
	Sep 08 12:02:38 functional-982703 kubelet[5250]: E0908 12:02:38.883140    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="099fa2a8-93cb-41e5-bc04-b7283dd0c405"
	Sep 08 12:02:40 functional-982703 kubelet[5250]: E0908 12:02:40.884362    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-v9krs" podUID="f7289fc4-893f-444e-a079-1924fc0e5d1d"
	Sep 08 12:02:43 functional-982703 kubelet[5250]: E0908 12:02:43.883173    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-hjrc4" podUID="af9a3701-91c8-45e0-a4ca-5f35d3e9c05a"
	
	
	==> storage-provisioner [376696c34ca30bbc2c9ba32382c554c4721ccf0e62f98b592421f2e54f245671] <==
	W0908 12:02:22.846608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:24.850104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:24.855457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:26.859003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:26.863307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:28.866303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:28.870270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:30.873570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:30.877652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:32.881300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:32.885884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:34.889580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:34.895189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:36.898937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:36.902871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:38.906522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:38.910644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:40.914073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:40.919349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:42.922871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:42.928460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:44.932170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:44.936522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:46.939753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:02:46.943751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [db496053d539cb8b18db74e98dfad7b03bba337a9d25ae0ca7d6961f5f4adf7f] <==
	I0908 11:45:24.890428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 11:45:24.898342       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 11:45:24.898395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 11:45:24.900904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:28.356529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:32.617207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:36.216423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:39.270880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
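Note: the kubelet log above shows two distinct image-pull failure modes: docker.io pulls rejected with "toomanyrequests" (the unauthenticated Docker Hub rate limit), and the short name "kicbase/echo-server" rejected because CRI-O has no unqualified-search registries defined in /etc/containers/registries.conf. A minimal, illustrative registries.conf entry (not part of this test environment) that would let CRI-O expand short names against Docker Hub:

	# /etc/containers/registries.conf -- illustrative sketch only, not the CI node's actual config
	unqualified-search-registries = ["docker.io"]

Fully qualifying the image reference (e.g. docker.io/kicbase/echo-server) would avoid relying on this setting at all.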
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-982703 -n functional-982703
helpers_test.go:269: (dbg) Run:  kubectl --context functional-982703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj: exit status 1 (106.859871ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:51:53 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://26a0c977230c065341b7f2a3d1081ddb31606fba545a47b7a30641c7f3d8fc73
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 11:52:38 +0000
	      Finished:     Mon, 08 Sep 2025 11:52:38 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4pg5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-4pg5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-982703
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.354s (44.4s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hjrc4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rhf2d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rhf2d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  16m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hjrc4 to functional-982703
	  Normal   Pulling    10m (x5 over 16m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     10m (x5 over 16m)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m9s (x6 over 16m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Normal   BackOff    82s (x44 over 16m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     82s (x44 over 16m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-w982h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:52:44 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2mm5t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2mm5t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w982h to functional-982703
	  Normal   Pulling    3m41s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m8s (x5 over 9m9s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     3m8s (x5 over 9m9s)   kubelet            Error: ErrImagePull
	  Warning  Failed     112s (x16 over 9m8s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    45s (x21 over 9m8s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-jptcl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4nfdt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4nfdt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  16m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-jptcl to functional-982703
	  Warning  Failed     10m                  kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    9m19s (x5 over 16m)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m39s (x4 over 15m)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m39s (x5 over 15m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    73s (x33 over 15m)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     13s (x38 over 15m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:28 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9mcrq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9mcrq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  16m                  default-scheduler  Successfully assigned default/nginx-svc to functional-982703
	  Normal   Pulling    9m49s (x5 over 16m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     8m39s (x5 over 15m)  kubelet            Error: ErrImagePull
	  Warning  Failed     5m8s (x6 over 15m)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    76s (x40 over 15m)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     47s (x42 over 15m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vksvd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vksvd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  16m                   default-scheduler  Successfully assigned default/sp-pod to functional-982703
	  Normal   Pulling    8m47s (x5 over 16m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m39s (x5 over 14m)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m39s (x5 over 14m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m45s (x19 over 14m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    70s (x31 over 14m)    kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-v9krs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bq8vj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.08s)
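Note: the failure above is driven by the echo-server container never starting: CRI-O refuses the short name "kicbase/echo-server:latest" (see the hello-node and hello-node-connect events in the describe output), so both pods stay in ImagePullBackOff until the connect test times out. One illustrative workaround, assuming the image is published on Docker Hub under the same name, is to point the deployment at a fully qualified reference:

	# illustrative only; deployment and container names are taken from the describe output above
	kubectl --context functional-982703 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest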

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (368.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [80a352bb-b59d-4ca3-8907-6788ab576ac0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00407496s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-982703 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-982703 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-982703 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-982703 apply -f testdata/storage-provisioner/pod.yaml
I0908 11:46:31.883425  618620 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [099fa2a8-93cb-41e5-bc04-b7283dd0c405] Pending
helpers_test.go:352: "sp-pod" [099fa2a8-93cb-41e5-bc04-b7283dd0c405] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0908 11:46:37.248529  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:47:18.210035  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:48:40.131696  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-982703 -n functional-982703
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-08 11:52:32.197057386 +0000 UTC m=+1171.489799864
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-982703 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-982703 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-982703/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vksvd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-vksvd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m1s                 default-scheduler  Successfully assigned default/sp-pod to functional-982703
Warning  Failed     93s (x3 over 4m34s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     93s (x3 over 4m34s)  kubelet            Error: ErrImagePull
Normal   BackOff    55s (x5 over 4m33s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     55s (x5 over 4m33s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    42s (x4 over 6m)     kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-982703 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-982703 logs sp-pod -n default: exit status 1 (75.556316ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-982703 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
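Note: sp-pod never leaves Pending because every in-cluster pull of docker.io/nginx hits the unauthenticated Docker Hub rate limit ("toomanyrequests" in the events above). A sketch of one way to sidestep the in-cluster pull, assuming the image is available to the host's Docker daemon, mirrors the "image load --daemon" pattern this suite already uses for echo-server (see the Audit table below):

	# illustrative only: pull once on the host, then copy the image into the minikube node
	docker pull docker.io/nginx:latest
	out/minikube-linux-amd64 -p functional-982703 image load --daemon docker.io/nginx:latest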
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-982703
helpers_test.go:243: (dbg) docker inspect functional-982703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d",
	        "Created": "2025-09-08T11:43:57.942130108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 647387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T11:43:57.979949835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/hostname",
	        "HostsPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/hosts",
	        "LogPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d-json.log",
	        "Name": "/functional-982703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-982703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-982703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d",
	                "LowerDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-982703",
	                "Source": "/var/lib/docker/volumes/functional-982703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-982703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-982703",
	                "name.minikube.sigs.k8s.io": "functional-982703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6902825b2bb740d3c0467667496661373e1d2904a6767d52684e83e116edad23",
	            "SandboxKey": "/var/run/docker/netns/6902825b2bb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-982703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:24:1a:fe:e6:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9662c4988a658ac6b4213aad8f235c9b158bb736b526a267a844b3d243ae232c",
	                    "EndpointID": "ecd42e44f56051e04408d02689c47c58881299836b72fa0b768c7a3a98b3eb81",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-982703",
	                        "620b8d39c764"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-982703 -n functional-982703
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 logs -n 25: (1.533532557s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-982703 ssh sudo cat /usr/share/ca-certificates/6186202.pem                                                                                           │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ image   │ functional-982703 image ls                                                                                                                                      │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ ssh     │ functional-982703 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ image   │ functional-982703 image load --daemon kicbase/echo-server:functional-982703 --alsologtostderr                                                                   │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ ssh     │ functional-982703 ssh echo hello                                                                                                                                │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ ssh     │ functional-982703 ssh cat /etc/hostname                                                                                                                         │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ image   │ functional-982703 image ls                                                                                                                                      │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ tunnel  │ functional-982703 tunnel --alsologtostderr                                                                                                                      │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │                     │
	│ tunnel  │ functional-982703 tunnel --alsologtostderr                                                                                                                      │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │                     │
	│ image   │ functional-982703 image load --daemon kicbase/echo-server:functional-982703 --alsologtostderr                                                                   │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ tunnel  │ functional-982703 tunnel --alsologtostderr                                                                                                                      │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │                     │
	│ image   │ functional-982703 image ls                                                                                                                                      │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ image   │ functional-982703 image save kicbase/echo-server:functional-982703 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ image   │ functional-982703 image rm kicbase/echo-server:functional-982703 --alsologtostderr                                                                              │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ image   │ functional-982703 image ls                                                                                                                                      │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ image   │ functional-982703 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ image   │ functional-982703 image ls                                                                                                                                      │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ image   │ functional-982703 image save --daemon kicbase/echo-server:functional-982703 --alsologtostderr                                                                   │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:46 UTC │ 08 Sep 25 11:46 UTC │
	│ addons  │ functional-982703 addons list                                                                                                                                   │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:51 UTC │ 08 Sep 25 11:51 UTC │
	│ addons  │ functional-982703 addons list -o json                                                                                                                           │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:51 UTC │ 08 Sep 25 11:51 UTC │
	│ ssh     │ functional-982703 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:51 UTC │                     │
	│ mount   │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdany-port3842913867/001:/mount-9p --alsologtostderr -v=1                                                 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:51 UTC │                     │
	│ ssh     │ functional-982703 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:51 UTC │ 08 Sep 25 11:51 UTC │
	│ ssh     │ functional-982703 ssh -- ls -la /mount-9p                                                                                                                       │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:51 UTC │ 08 Sep 25 11:51 UTC │
	│ ssh     │ functional-982703 ssh cat /mount-9p/test-1757332311617822979                                                                                                    │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:51 UTC │ 08 Sep 25 11:51 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:45:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:45:38.155951  653679 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:45:38.156249  653679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:45:38.156253  653679 out.go:374] Setting ErrFile to fd 2...
	I0908 11:45:38.156257  653679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:45:38.156507  653679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 11:45:38.157146  653679 out.go:368] Setting JSON to false
	I0908 11:45:38.158261  653679 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8882,"bootTime":1757323056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:45:38.158328  653679 start.go:140] virtualization: kvm guest
	I0908 11:45:38.160742  653679 out.go:179] * [functional-982703] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:45:38.162184  653679 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:45:38.162207  653679 notify.go:220] Checking for updates...
	I0908 11:45:38.165116  653679 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:45:38.166616  653679 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:45:38.168297  653679 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:45:38.169739  653679 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:45:38.171254  653679 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:45:38.173232  653679 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:45:38.173383  653679 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:45:38.198109  653679 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:45:38.198247  653679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:45:38.251077  653679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:66 SystemTime:2025-09-08 11:45:38.240773926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:45:38.251230  653679 docker.go:318] overlay module found
	I0908 11:45:38.253160  653679 out.go:179] * Using the docker driver based on existing profile
	I0908 11:45:38.254581  653679 start.go:304] selected driver: docker
	I0908 11:45:38.254590  653679 start.go:918] validating driver "docker" against &{Name:functional-982703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:45:38.254677  653679 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:45:38.254775  653679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:45:38.304452  653679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:66 SystemTime:2025-09-08 11:45:38.294874392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:45:38.305186  653679 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 11:45:38.305219  653679 cni.go:84] Creating CNI manager for ""
	I0908 11:45:38.305271  653679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:45:38.305322  653679 start.go:348] cluster config:
	{Name:functional-982703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:45:38.307848  653679 out.go:179] * Starting "functional-982703" primary control-plane node in "functional-982703" cluster
	I0908 11:45:38.309039  653679 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 11:45:38.310287  653679 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 11:45:38.311432  653679 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:45:38.311476  653679 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 11:45:38.311483  653679 cache.go:58] Caching tarball of preloaded images
	I0908 11:45:38.311534  653679 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 11:45:38.311603  653679 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 11:45:38.311613  653679 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 11:45:38.311782  653679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/config.json ...
	I0908 11:45:38.332549  653679 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 11:45:38.332560  653679 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 11:45:38.332577  653679 cache.go:232] Successfully downloaded all kic artifacts
	I0908 11:45:38.332604  653679 start.go:360] acquireMachinesLock for functional-982703: {Name:mk61640b73a8731c9fb60f70b6a30ab59656ce6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:45:38.332694  653679 start.go:364] duration metric: took 42.647µs to acquireMachinesLock for "functional-982703"
	I0908 11:45:38.332713  653679 start.go:96] Skipping create...Using existing machine configuration
	I0908 11:45:38.332717  653679 fix.go:54] fixHost starting: 
	I0908 11:45:38.332922  653679 cli_runner.go:164] Run: docker container inspect functional-982703 --format={{.State.Status}}
	I0908 11:45:38.350801  653679 fix.go:112] recreateIfNeeded on functional-982703: state=Running err=<nil>
	W0908 11:45:38.350848  653679 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 11:45:38.352627  653679 out.go:252] * Updating the running docker "functional-982703" container ...
	I0908 11:45:38.352649  653679 machine.go:93] provisionDockerMachine start ...
	I0908 11:45:38.352752  653679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
	I0908 11:45:38.370935  653679 main.go:141] libmachine: Using SSH client type: native
	I0908 11:45:38.371204  653679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0908 11:45:38.371211  653679 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 11:45:38.491546  653679 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-982703
	
	I0908 11:45:38.491573  653679 ubuntu.go:182] provisioning hostname "functional-982703"
	I0908 11:45:38.491644  653679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
	I0908 11:45:38.510463  653679 main.go:141] libmachine: Using SSH client type: native
	I0908 11:45:38.510686  653679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0908 11:45:38.510695  653679 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-982703 && echo "functional-982703" | sudo tee /etc/hostname
	I0908 11:45:38.643939  653679 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-982703
	
	I0908 11:45:38.644024  653679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
	I0908 11:45:38.662961  653679 main.go:141] libmachine: Using SSH client type: native
	I0908 11:45:38.663244  653679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0908 11:45:38.663256  653679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-982703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-982703/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-982703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 11:45:38.784858  653679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 11:45:38.784879  653679 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 11:45:38.784897  653679 ubuntu.go:190] setting up certificates
	I0908 11:45:38.784909  653679 provision.go:84] configureAuth start
	I0908 11:45:38.784963  653679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-982703
	I0908 11:45:38.804938  653679 provision.go:143] copyHostCerts
	I0908 11:45:38.805007  653679 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem, removing ...
	I0908 11:45:38.805021  653679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem
	I0908 11:45:38.805098  653679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 11:45:38.805217  653679 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem, removing ...
	I0908 11:45:38.805222  653679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem
	I0908 11:45:38.805246  653679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 11:45:38.805296  653679 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem, removing ...
	I0908 11:45:38.805298  653679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem
	I0908 11:45:38.805318  653679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 11:45:38.805362  653679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.functional-982703 san=[127.0.0.1 192.168.49.2 functional-982703 localhost minikube]
	I0908 11:45:39.098271  653679 provision.go:177] copyRemoteCerts
	I0908 11:45:39.098332  653679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 11:45:39.098369  653679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
	I0908 11:45:39.116732  653679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
	I0908 11:45:39.205946  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 11:45:39.231039  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 11:45:39.256736  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 11:45:39.283873  653679 provision.go:87] duration metric: took 498.943998ms to configureAuth
	I0908 11:45:39.283899  653679 ubuntu.go:206] setting minikube options for container-runtime
	I0908 11:45:39.284083  653679 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:45:39.284182  653679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
	I0908 11:45:39.302848  653679 main.go:141] libmachine: Using SSH client type: native
	I0908 11:45:39.303155  653679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0908 11:45:39.303175  653679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 11:45:44.690263  653679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 11:45:44.690283  653679 machine.go:96] duration metric: took 6.337626648s to provisionDockerMachine
	I0908 11:45:44.690296  653679 start.go:293] postStartSetup for "functional-982703" (driver="docker")
	I0908 11:45:44.690307  653679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 11:45:44.690403  653679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 11:45:44.690439  653679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
	I0908 11:45:44.709351  653679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
	I0908 11:45:44.805384  653679 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 11:45:44.808975  653679 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 11:45:44.809001  653679 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 11:45:44.809041  653679 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 11:45:44.809050  653679 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 11:45:44.809062  653679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 11:45:44.809124  653679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 11:45:44.809230  653679 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem -> 6186202.pem in /etc/ssl/certs
	I0908 11:45:44.809426  653679 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/test/nested/copy/618620/hosts -> hosts in /etc/test/nested/copy/618620
	I0908 11:45:44.809489  653679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/618620
	I0908 11:45:44.818948  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 11:45:44.845052  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/test/nested/copy/618620/hosts --> /etc/test/nested/copy/618620/hosts (40 bytes)
	I0908 11:45:44.870616  653679 start.go:296] duration metric: took 180.303583ms for postStartSetup
	I0908 11:45:44.870689  653679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:45:44.870728  653679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
	I0908 11:45:44.889560  653679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
	I0908 11:45:44.977236  653679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 11:45:44.982307  653679 fix.go:56] duration metric: took 6.649579507s for fixHost
	I0908 11:45:44.982328  653679 start.go:83] releasing machines lock for "functional-982703", held for 6.649626041s
	I0908 11:45:44.982408  653679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-982703
	I0908 11:45:45.000783  653679 ssh_runner.go:195] Run: cat /version.json
	I0908 11:45:45.000824  653679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
	I0908 11:45:45.000832  653679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 11:45:45.000895  653679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
	I0908 11:45:45.020202  653679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
	I0908 11:45:45.020207  653679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
	I0908 11:45:45.104146  653679 ssh_runner.go:195] Run: systemctl --version
	I0908 11:45:45.179889  653679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 11:45:45.322418  653679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 11:45:45.327236  653679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:45:45.337113  653679 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 11:45:45.337180  653679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 11:45:45.346935  653679 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 11:45:45.346950  653679 start.go:495] detecting cgroup driver to use...
	I0908 11:45:45.346987  653679 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 11:45:45.347027  653679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 11:45:45.360433  653679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 11:45:45.373045  653679 docker.go:218] disabling cri-docker service (if available) ...
	I0908 11:45:45.373108  653679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 11:45:45.387910  653679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 11:45:45.401688  653679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 11:45:45.519451  653679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 11:45:45.644468  653679 docker.go:234] disabling docker service ...
	I0908 11:45:45.644533  653679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 11:45:45.658358  653679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 11:45:45.670903  653679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 11:45:45.790568  653679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 11:45:45.913483  653679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 11:45:45.926139  653679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 11:45:45.944552  653679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 11:45:45.944604  653679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:45:45.955825  653679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 11:45:45.955875  653679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:45:45.967344  653679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:45:45.979679  653679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:45:45.991410  653679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 11:45:46.001988  653679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:45:46.013384  653679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:45:46.023790  653679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 11:45:46.034665  653679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 11:45:46.044212  653679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 11:45:46.053228  653679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:45:46.165722  653679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 11:45:54.119567  653679 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.953810971s)
	I0908 11:45:54.119590  653679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 11:45:54.119642  653679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 11:45:54.123680  653679 start.go:563] Will wait 60s for crictl version
	I0908 11:45:54.123735  653679 ssh_runner.go:195] Run: which crictl
	I0908 11:45:54.127459  653679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 11:45:54.163830  653679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 11:45:54.163904  653679 ssh_runner.go:195] Run: crio --version
	I0908 11:45:54.201384  653679 ssh_runner.go:195] Run: crio --version
	I0908 11:45:54.244979  653679 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 11:45:54.247036  653679 cli_runner.go:164] Run: docker network inspect functional-982703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 11:45:54.266092  653679 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 11:45:54.272961  653679 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0908 11:45:54.274845  653679 kubeadm.go:875] updating cluster {Name:functional-982703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 11:45:54.275021  653679 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 11:45:54.275101  653679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:45:54.323109  653679 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:45:54.323122  653679 crio.go:433] Images already preloaded, skipping extraction
	I0908 11:45:54.323174  653679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 11:45:54.360321  653679 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 11:45:54.360336  653679 cache_images.go:85] Images are preloaded, skipping loading
	I0908 11:45:54.360343  653679 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0908 11:45:54.360458  653679 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-982703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 11:45:54.360528  653679 ssh_runner.go:195] Run: crio config
	I0908 11:45:54.409619  653679 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0908 11:45:54.409689  653679 cni.go:84] Creating CNI manager for ""
	I0908 11:45:54.409696  653679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:45:54.409705  653679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 11:45:54.409725  653679 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-982703 NodeName:functional-982703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 11:45:54.409854  653679 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-982703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 11:45:54.409916  653679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 11:45:54.419721  653679 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 11:45:54.419780  653679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 11:45:54.429105  653679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0908 11:45:54.448468  653679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 11:45:54.467762  653679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0908 11:45:54.486934  653679 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 11:45:54.490977  653679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 11:45:54.610912  653679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 11:45:54.623292  653679 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703 for IP: 192.168.49.2
	I0908 11:45:54.623307  653679 certs.go:194] generating shared ca certs ...
	I0908 11:45:54.623321  653679 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:45:54.623488  653679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 11:45:54.623519  653679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 11:45:54.623525  653679 certs.go:256] generating profile certs ...
	I0908 11:45:54.623671  653679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.key
	I0908 11:45:54.623723  653679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/apiserver.key.97f87b1c
	I0908 11:45:54.623756  653679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/proxy-client.key
	I0908 11:45:54.623861  653679 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem (1338 bytes)
	W0908 11:45:54.623885  653679 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620_empty.pem, impossibly tiny 0 bytes
	I0908 11:45:54.623892  653679 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 11:45:54.623912  653679 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 11:45:54.623932  653679 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 11:45:54.623949  653679 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 11:45:54.623983  653679 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 11:45:54.624550  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 11:45:54.649550  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 11:45:54.674870  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 11:45:54.699869  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 11:45:54.726248  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 11:45:54.752155  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 11:45:54.778092  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 11:45:54.804184  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 11:45:54.830401  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /usr/share/ca-certificates/6186202.pem (1708 bytes)
	I0908 11:45:54.857294  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 11:45:54.883416  653679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem --> /usr/share/ca-certificates/618620.pem (1338 bytes)
	I0908 11:45:54.908694  653679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 11:45:54.927871  653679 ssh_runner.go:195] Run: openssl version
	I0908 11:45:54.933899  653679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6186202.pem && ln -fs /usr/share/ca-certificates/6186202.pem /etc/ssl/certs/6186202.pem"
	I0908 11:45:54.944832  653679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6186202.pem
	I0908 11:45:54.949275  653679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:43 /usr/share/ca-certificates/6186202.pem
	I0908 11:45:54.949363  653679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6186202.pem
	I0908 11:45:54.956834  653679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6186202.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 11:45:54.966987  653679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 11:45:54.977951  653679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:45:54.982480  653679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:45:54.982539  653679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 11:45:54.989895  653679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 11:45:54.999920  653679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/618620.pem && ln -fs /usr/share/ca-certificates/618620.pem /etc/ssl/certs/618620.pem"
	I0908 11:45:55.010184  653679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/618620.pem
	I0908 11:45:55.014021  653679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:43 /usr/share/ca-certificates/618620.pem
	I0908 11:45:55.014073  653679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/618620.pem
	I0908 11:45:55.021436  653679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/618620.pem /etc/ssl/certs/51391683.0"
	I0908 11:45:55.031482  653679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 11:45:55.035760  653679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 11:45:55.042891  653679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 11:45:55.049744  653679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 11:45:55.056150  653679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 11:45:55.062822  653679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 11:45:55.069480  653679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0908 11:45:55.076811  653679 kubeadm.go:392] StartCluster: {Name:functional-982703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:45:55.076947  653679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 11:45:55.077067  653679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 11:45:55.116158  653679 cri.go:89] found id: "db496053d539cb8b18db74e98dfad7b03bba337a9d25ae0ca7d6961f5f4adf7f"
	I0908 11:45:55.116173  653679 cri.go:89] found id: "016cad6cb498627f9e071ef394bbb7698274cffff4a692221c460fa767dc80d8"
	I0908 11:45:55.116176  653679 cri.go:89] found id: "e3387d94e21e532324ce978c52b5d198ad7d23fe0a9995d18899c5c5f2505e25"
	I0908 11:45:55.116178  653679 cri.go:89] found id: "89a207031e603961a2416d913065acdaa4c0d813015bb101bcbee772637817c4"
	I0908 11:45:55.116180  653679 cri.go:89] found id: "655bb772c77f9c54e947f894f2b5408e378a44a4ec5426abec99efd7315e4aba"
	I0908 11:45:55.116182  653679 cri.go:89] found id: "88e8c46a970af2a0a8330bed0cb2cd07f68d7ba0a0261387a4c6ba49c8ec196f"
	I0908 11:45:55.116184  653679 cri.go:89] found id: "b9a3c59844852e92921a551d8a9a1ccd064a0df62795367d7d1061794aa1795e"
	I0908 11:45:55.116186  653679 cri.go:89] found id: "e6d4d2a05b147621b00c4b5c735c4b838f8470e96d21727b361e8b9689df5993"
	I0908 11:45:55.116188  653679 cri.go:89] found id: "203322c9e6099e8aa5d308a3966f409f94266a3886fea6b8f9b14bc0a38b6779"
	I0908 11:45:55.116195  653679 cri.go:89] found id: "34aaf780afe2d4b88e0da07f2f7c8db130776a7d3333582e1671b67264e2540e"
	I0908 11:45:55.116197  653679 cri.go:89] found id: "0a9b06f3a79d33e5c667879d32460e2671b5d53664fba48d090f8aa18c471653"
	I0908 11:45:55.116199  653679 cri.go:89] found id: "cf8463d84156289714068f7842a37084f56e96102c2374e1791b92f39fe3f2c5"
	I0908 11:45:55.116201  653679 cri.go:89] found id: "0a162cba40e9a6f22a278e6c1f9811de3ed582751e018719b66c69c69fee84e2"
	I0908 11:45:55.116203  653679 cri.go:89] found id: "29a119bf50e6d7a3aec617f55171d4b5d588e882331862e82353747cb2ad5a84"
	I0908 11:45:55.116204  653679 cri.go:89] found id: "16e62cdd4d520608590c5b8316ce52c59fb0e0ce7c7680cec6165402c8349f3b"
	I0908 11:45:55.116207  653679 cri.go:89] found id: "980aa2d1cd0d500b05f31d94d3b8284bde452bb3b827f2026575b1686c2db3d2"
	I0908 11:45:55.116209  653679 cri.go:89] found id: ""
	I0908 11:45:55.116254  653679 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-982703 -n functional-982703
helpers_test.go:269: (dbg) Run:  kubectl --context functional-982703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hjrc4 mysql-5bb876957f-jptcl nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 mysql-5bb876957f-jptcl nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 mysql-5bb876957f-jptcl nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:51:53 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4pg5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-4pg5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  41s   default-scheduler  Successfully assigned default/busybox-mount to functional-982703
	  Normal  Pulling    41s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             hello-node-75c85bcc94-hjrc4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rhf2d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rhf2d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m10s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hjrc4 to functional-982703
	  Warning  Failed     95s (x4 over 6m10s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     95s (x4 over 6m10s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    21s (x10 over 6m9s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     21s (x10 over 6m9s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    8s (x5 over 6m10s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-jptcl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4nfdt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4nfdt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-jptcl to functional-982703
	  Warning  Failed     2m5s (x3 over 5m6s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    80s (x4 over 6m2s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     29s (x4 over 5m6s)   kubelet            Error: ErrImagePull
	  Warning  Failed     29s                  kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x7 over 5m5s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4s (x7 over 5m5s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:28 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9mcrq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9mcrq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m6s                 default-scheduler  Successfully assigned default/nginx-svc to functional-982703
	  Normal   Pulling    115s (x4 over 6m6s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     65s (x4 over 5m36s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     65s (x4 over 5m36s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x9 over 5m35s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     1s (x9 over 5m35s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vksvd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vksvd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-982703
	  Warning  Failed     95s (x3 over 4m36s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     95s (x3 over 4m36s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    57s (x5 over 4m35s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     57s (x5 over 4m35s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    44s (x4 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.19s)

                                                
                                    
TestFunctional/parallel/MySQL (603.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-982703 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-jptcl" [05dc921d-b1c5-4289-8b99-cf2bf22ea0a7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-982703 -n functional-982703
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-08 11:56:31.723288864 +0000 UTC m=+1411.016031335
functional_test.go:1804: (dbg) Run:  kubectl --context functional-982703 describe po mysql-5bb876957f-jptcl -n default
functional_test.go:1804: (dbg) kubectl --context functional-982703 describe po mysql-5bb876957f-jptcl -n default:
Name:             mysql-5bb876957f-jptcl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-982703/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4nfdt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-4nfdt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-jptcl to functional-982703
  Warning  Failed     4m26s                 kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    3m3s (x5 over 9m59s)  kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     83s (x4 over 9m3s)    kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     83s (x5 over 9m3s)    kubelet            Error: ErrImagePull
  Normal   BackOff    10s (x16 over 9m2s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     10s (x16 over 9m2s)   kubelet            Error: ImagePullBackOff
functional_test.go:1804: (dbg) Run:  kubectl --context functional-982703 logs mysql-5bb876957f-jptcl -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-982703 logs mysql-5bb876957f-jptcl -n default: exit status 1 (71.366156ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-jptcl" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-982703 logs mysql-5bb876957f-jptcl -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
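Editor's note: the ErrImagePull / ImagePullBackOff events above share one root cause: every in-cluster pull of docker.io/mysql:5.7 is anonymous and is rejected by Docker Hub with "toomanyrequests" once the unauthenticated rate limit is hit. As a sketch only (not part of the recorded test run), one way to make such a run independent of anonymous Docker Hub pulls is to pull the image once on the host and pre-load it into the node so the kubelet never contacts docker.io; the commands below assume the docker CLI is available on the host and reuse the functional-982703 profile name seen in the logs above.

	# Pull once on the host; running `docker login` first raises the host's rate limit.
	docker pull docker.io/mysql:5.7
	# Copy the host-cached image into the minikube node's container runtime (cri-o here).
	minikube -p functional-982703 image load docker.io/mysql:5.7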
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-982703
helpers_test.go:243: (dbg) docker inspect functional-982703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d",
	        "Created": "2025-09-08T11:43:57.942130108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 647387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T11:43:57.979949835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/hostname",
	        "HostsPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/hosts",
	        "LogPath": "/var/lib/docker/containers/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d-json.log",
	        "Name": "/functional-982703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-982703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-982703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d",
	                "LowerDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/01e0ebe16f32e52d426b0fe3cff4efd0a586f9e9c8094b52873f4bcee3589eff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-982703",
	                "Source": "/var/lib/docker/volumes/functional-982703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-982703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-982703",
	                "name.minikube.sigs.k8s.io": "functional-982703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6902825b2bb740d3c0467667496661373e1d2904a6767d52684e83e116edad23",
	            "SandboxKey": "/var/run/docker/netns/6902825b2bb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-982703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:24:1a:fe:e6:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9662c4988a658ac6b4213aad8f235c9b158bb736b526a267a844b3d243ae232c",
	                    "EndpointID": "ecd42e44f56051e04408d02689c47c58881299836b72fa0b768c7a3a98b3eb81",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-982703",
	                        "620b8d39c764"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-982703 -n functional-982703
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 logs -n 25: (1.736678469s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start     │ -p functional-982703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ start     │ -p functional-982703 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ start     │ -p functional-982703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-982703 --alsologtostderr -v=1                                                                    │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ ssh       │ functional-982703 ssh stat /mount-9p/created-by-test                                                                              │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh       │ functional-982703 ssh stat /mount-9p/created-by-pod                                                                               │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh       │ functional-982703 ssh sudo umount -f /mount-9p                                                                                    │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ mount     │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdspecific-port1884405908/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ ssh       │ functional-982703 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ ssh       │ functional-982703 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh       │ functional-982703 ssh -- ls -la /mount-9p                                                                                         │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh       │ functional-982703 ssh sudo umount -f /mount-9p                                                                                    │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ mount     │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount1 --alsologtostderr -v=1                 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ ssh       │ functional-982703 ssh findmnt -T /mount1                                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ mount     │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount2 --alsologtostderr -v=1                 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ mount     │ -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount3 --alsologtostderr -v=1                 │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ ssh       │ functional-982703 ssh findmnt -T /mount1                                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh       │ functional-982703 ssh findmnt -T /mount2                                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ ssh       │ functional-982703 ssh findmnt -T /mount3                                                                                          │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │ 08 Sep 25 11:52 UTC │
	│ mount     │ -p functional-982703 --kill=true                                                                                                  │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:52 UTC │                     │
	│ service   │ functional-982703 service list                                                                                                    │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ service   │ functional-982703 service list -o json                                                                                            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │ 08 Sep 25 11:56 UTC │
	│ service   │ functional-982703 service --namespace=default --https --url hello-node                                                            │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ service   │ functional-982703 service hello-node --url --format={{.IP}}                                                                       │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	│ service   │ functional-982703 service hello-node --url                                                                                        │ functional-982703 │ jenkins │ v1.36.0 │ 08 Sep 25 11:56 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:52:35
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:52:35.101685  664079 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:52:35.102075  664079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:35.102106  664079 out.go:374] Setting ErrFile to fd 2...
	I0908 11:52:35.102114  664079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:35.102913  664079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 11:52:35.104207  664079 out.go:368] Setting JSON to false
	I0908 11:52:35.105302  664079 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9299,"bootTime":1757323056,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:52:35.105427  664079 start.go:140] virtualization: kvm guest
	I0908 11:52:35.107319  664079 out.go:179] * [functional-982703] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:52:35.108875  664079 notify.go:220] Checking for updates...
	I0908 11:52:35.108926  664079 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:52:35.110462  664079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:52:35.111752  664079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:52:35.112962  664079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:52:35.114299  664079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:52:35.115722  664079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:52:35.117669  664079 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:52:35.118354  664079 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:52:35.142804  664079 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:52:35.142929  664079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:52:35.194775  664079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 11:52:35.185335554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:52:35.194894  664079 docker.go:318] overlay module found
	I0908 11:52:35.196779  664079 out.go:179] * Using the docker driver based on existing profile
	I0908 11:52:35.198028  664079 start.go:304] selected driver: docker
	I0908 11:52:35.198047  664079 start.go:918] validating driver "docker" against &{Name:functional-982703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:52:35.198174  664079 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:52:35.200642  664079 out.go:203] 
	W0908 11:52:35.202109  664079 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 11:52:35.203607  664079 out.go:203] 
	
	
	==> CRI-O <==
	Sep 08 11:55:37 functional-982703 crio[4885]: time="2025-09-08 11:55:37.883134889Z" level=info msg="Image docker.io/nginx:alpine not found" id=24642f15-05cb-41da-af1b-f419e965844c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:55:38 functional-982703 crio[4885]: time="2025-09-08 11:55:38.773489197Z" level=info msg="Pulling image: docker.io/nginx:latest" id=84e1d0d5-d379-489b-9013-f19d4d78d425 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:55:38 functional-982703 crio[4885]: time="2025-09-08 11:55:38.779996651Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 08 11:55:44 functional-982703 crio[4885]: time="2025-09-08 11:55:44.883610069Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=3bcb5890-635e-41be-91e5-0d9f868f7863 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:55:44 functional-982703 crio[4885]: time="2025-09-08 11:55:44.883902320Z" level=info msg="Image docker.io/mysql:5.7 not found" id=3bcb5890-635e-41be-91e5-0d9f868f7863 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:55:49 functional-982703 crio[4885]: time="2025-09-08 11:55:49.882926220Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=604b2d66-7fea-4158-9050-79fd5281f318 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:55:49 functional-982703 crio[4885]: time="2025-09-08 11:55:49.883393107Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=604b2d66-7fea-4158-9050-79fd5281f318 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:55:52 functional-982703 crio[4885]: time="2025-09-08 11:55:52.883465631Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=bc505906-266d-40fd-bfb3-74b1205ab767 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:55:52 functional-982703 crio[4885]: time="2025-09-08 11:55:52.883784248Z" level=info msg="Image docker.io/nginx:alpine not found" id=bc505906-266d-40fd-bfb3-74b1205ab767 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:55:55 functional-982703 crio[4885]: time="2025-09-08 11:55:55.883156048Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=dfef4a7a-7c91-4bc2-ab26-7dc76e6fe248 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:55:55 functional-982703 crio[4885]: time="2025-09-08 11:55:55.883477655Z" level=info msg="Image docker.io/mysql:5.7 not found" id=dfef4a7a-7c91-4bc2-ab26-7dc76e6fe248 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:03 functional-982703 crio[4885]: time="2025-09-08 11:56:03.883828038Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=20f709f9-206f-4b36-a9e8-8d8da2715890 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:03 functional-982703 crio[4885]: time="2025-09-08 11:56:03.883877908Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=b5399735-f827-4b35-b435-a90235b6a685 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:03 functional-982703 crio[4885]: time="2025-09-08 11:56:03.884113205Z" level=info msg="Image docker.io/nginx:alpine not found" id=20f709f9-206f-4b36-a9e8-8d8da2715890 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:03 functional-982703 crio[4885]: time="2025-09-08 11:56:03.884196253Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=b5399735-f827-4b35-b435-a90235b6a685 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:08 functional-982703 crio[4885]: time="2025-09-08 11:56:08.865753121Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=bd783120-43f5-42aa-a987-9d5ce512f064 name=/runtime.v1.ImageService/PullImage
	Sep 08 11:56:08 functional-982703 crio[4885]: time="2025-09-08 11:56:08.870244317Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 11:56:09 functional-982703 crio[4885]: time="2025-09-08 11:56:09.883357647Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=8722f37d-9c3a-492c-afbf-f3122d55d92b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:09 functional-982703 crio[4885]: time="2025-09-08 11:56:09.883570202Z" level=info msg="Image docker.io/mysql:5.7 not found" id=8722f37d-9c3a-492c-afbf-f3122d55d92b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:15 functional-982703 crio[4885]: time="2025-09-08 11:56:15.882641502Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=58c6cee8-edf0-400b-b9f1-29f9a12b0165 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:15 functional-982703 crio[4885]: time="2025-09-08 11:56:15.882864694Z" level=info msg="Image docker.io/nginx:alpine not found" id=58c6cee8-edf0-400b-b9f1-29f9a12b0165 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:21 functional-982703 crio[4885]: time="2025-09-08 11:56:21.883146739Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=f5370d23-dc1d-4468-9ef9-547a819c382a name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:21 functional-982703 crio[4885]: time="2025-09-08 11:56:21.883414205Z" level=info msg="Image docker.io/mysql:5.7 not found" id=f5370d23-dc1d-4468-9ef9-547a819c382a name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:28 functional-982703 crio[4885]: time="2025-09-08 11:56:28.882945770Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ebd07a89-b963-466b-8cd7-821280c57434 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 11:56:28 functional-982703 crio[4885]: time="2025-09-08 11:56:28.883300379Z" level=info msg="Image docker.io/nginx:alpine not found" id=ebd07a89-b963-466b-8cd7-821280c57434 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	26a0c977230c0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   3 minutes ago       Exited              mount-munger              0                   08c62cdff7669       busybox-mount
	4dce901fc8b9a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   61c67e208f95d       coredns-66bc5c9577-4nfjt
	2237b45001e1c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      10 minutes ago      Running             kindnet-cni               2                   541d3c4b50a45       kindnet-c84fl
	a70b43c634714       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Running             kube-proxy                2                   30a817e495d15       kube-proxy-bfdlm
	376696c34ca30       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       3                   b3b650a07f8d2       storage-provisioner
	6cc77c69d4b6b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   4f042f35cc7b1       kube-apiserver-functional-982703
	433baee4a6e87       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Running             kube-scheduler            2                   387f0b53cd073       kube-scheduler-functional-982703
	ee5315cac0c16       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   2                   ea25526253ffd       kube-controller-manager-functional-982703
	48080d396cfa6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      2                   76b101dbdd1fa       etcd-functional-982703
	db496053d539c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       2                   b3b650a07f8d2       storage-provisioner
	016cad6cb4986       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Exited              kube-scheduler            1                   387f0b53cd073       kube-scheduler-functional-982703
	e3387d94e21e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      1                   76b101dbdd1fa       etcd-functional-982703
	655bb772c77f9       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      11 minutes ago      Exited              kube-controller-manager   1                   ea25526253ffd       kube-controller-manager-functional-982703
	88e8c46a970af       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Exited              kube-proxy                1                   30a817e495d15       kube-proxy-bfdlm
	e6d4d2a05b147       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   61c67e208f95d       coredns-66bc5c9577-4nfjt
	203322c9e6099       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Exited              kindnet-cni               1                   541d3c4b50a45       kindnet-c84fl
	
	
	==> coredns [4dce901fc8b9ae7b28abed9d5dc7ee69185491287c6795a1765827e63ffe6c48] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49772 - 51858 "HINFO IN 7281298086950625820.5304803197811442486. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039932s
	
	
	==> coredns [e6d4d2a05b147621b00c4b5c735c4b838f8470e96d21727b361e8b9689df5993] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43757 - 57561 "HINFO IN 2495312604912868128.9219992335971469583. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028825445s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-982703
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-982703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=functional-982703
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_44_14_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:44:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-982703
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:56:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:52:58 +0000   Mon, 08 Sep 2025 11:44:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:52:58 +0000   Mon, 08 Sep 2025 11:44:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:52:58 +0000   Mon, 08 Sep 2025 11:44:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:52:58 +0000   Mon, 08 Sep 2025 11:45:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-982703
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a515ea4462b4b4881ee42fa20f9a53c
	  System UUID:                e36415ef-2d7e-47e9-9dd1-8ace78acee2b
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hjrc4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-w982h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  default                     mysql-5bb876957f-jptcl                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-4nfjt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-982703                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-c84fl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-982703              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-982703     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bfdlm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-982703              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-v9krs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bq8vj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-982703 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-982703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-982703 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-982703 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-982703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-982703 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node functional-982703 event: Registered Node functional-982703 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-982703 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-982703 event: Registered Node functional-982703 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-982703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-982703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-982703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-982703 event: Registered Node functional-982703 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.003955] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000005] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000001] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +8.187305] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000030] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000006] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f00e2e5550ba
	[  +0.000002] ll header: 00000000: 56 2e be 2d 78 d9 92 4f 95 93 e9 d3 08 00
	[Sep 8 11:36] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +1.022122] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +2.019826] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[  +4.219629] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[Sep 8 11:37] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +16.130550] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	[ +33.273137] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe e6 1d 9e e2 09 2a b3 fe f6 b6 ae 08 00
	
	
	==> etcd [48080d396cfa6a0dac0c10db85d31d90273dd3c46b7b1ca63062718a84cc9060] <==
	{"level":"warn","ts":"2025-09-08T11:45:59.486089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.494099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.501959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.517248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.524709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.531823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.549977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.583359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.591243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.598207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.605908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.613005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.619748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.633415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.640052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.647566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.654606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.686239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.715518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.722259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.728709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:59.773178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44160","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:55:58.820866Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1073}
	{"level":"info","ts":"2025-09-08T11:55:58.841526Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1073,"took":"20.174566ms","hash":385303282,"current-db-size-bytes":3702784,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1843200,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-08T11:55:58.841597Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":385303282,"revision":1073,"compact-revision":-1}
	
	
	==> etcd [e3387d94e21e532324ce978c52b5d198ad7d23fe0a9995d18899c5c5f2505e25] <==
	{"level":"warn","ts":"2025-09-08T11:45:14.821317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.883471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.890380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.977383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.979931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:14.987688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:45:15.082194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:45:39.449507Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T11:45:39.449623Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-982703","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T11:45:39.449730Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:45:39.450918Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:45:39.591785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:45:39.591852Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-08T11:45:39.591901Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T11:45:39.591902Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:45:39.591981Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:45:39.591997Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T11:45:39.591896Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:45:39.592018Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:45:39.592024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:45:39.591918Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T11:45:39.595215Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T11:45:39.595306Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:45:39.595339Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T11:45:39.595351Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-982703","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:56:33 up  2:38,  0 users,  load average: 0.18, 0.33, 1.43
	Linux functional-982703 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [203322c9e6099e8aa5d308a3966f409f94266a3886fea6b8f9b14bc0a38b6779] <==
	I0908 11:45:12.384792       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 11:45:12.385228       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0908 11:45:12.385457       1 main.go:148] setting mtu 1500 for CNI 
	I0908 11:45:12.385513       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 11:45:12.385561       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T11:45:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E0908 11:45:12.798076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0908 11:45:12.798558       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 11:45:12.799396       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 11:45:12.799419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E0908 11:45:12.798959       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I0908 11:45:12.799524       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0908 11:45:12.799787       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0908 11:45:12.877911       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0908 11:45:16.280146       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0908 11:45:16.280251       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0908 11:45:16.280451       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0908 11:45:19.499723       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 11:45:19.499757       1 metrics.go:72] Registering metrics
	I0908 11:45:19.499826       1 controller.go:711] "Syncing nftables rules"
	I0908 11:45:22.799275       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:45:22.799337       1 main.go:301] handling current node
	I0908 11:45:32.798067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:45:32.798113       1 main.go:301] handling current node
	
	
	==> kindnet [2237b45001e1cf7ff66bdaa188050f3d2093dd00a95c75df5a370b8145a728a9] <==
	I0908 11:54:31.686738       1 main.go:301] handling current node
	I0908 11:54:41.691787       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:54:41.691831       1 main.go:301] handling current node
	I0908 11:54:51.691237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:54:51.691279       1 main.go:301] handling current node
	I0908 11:55:01.694220       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:55:01.694254       1 main.go:301] handling current node
	I0908 11:55:11.688567       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:55:11.688619       1 main.go:301] handling current node
	I0908 11:55:21.686561       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:55:21.686639       1 main.go:301] handling current node
	I0908 11:55:31.686360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:55:31.686411       1 main.go:301] handling current node
	I0908 11:55:41.688413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:55:41.688460       1 main.go:301] handling current node
	I0908 11:55:51.687516       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:55:51.687574       1 main.go:301] handling current node
	I0908 11:56:01.691804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:01.691841       1 main.go:301] handling current node
	I0908 11:56:11.690561       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:11.690598       1 main.go:301] handling current node
	I0908 11:56:21.686278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:21.686318       1 main.go:301] handling current node
	I0908 11:56:31.691758       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 11:56:31.691789       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6cc77c69d4b6b4a1fb131f9556f8c040fa5ebad2302358d862f48e91f8348ccd] <==
	I0908 11:46:24.577466       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.43.154"}
	I0908 11:46:28.210110       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.58.250"}
	I0908 11:46:31.360048       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.7.72"}
	I0908 11:47:03.048727       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:47:05.257706       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:48:18.974511       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:48:22.990666       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:49:35.781792       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:49:38.405030       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:50:55.332014       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:50:57.745882       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:51:55.445871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:52:27.064939       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:52:36.108472       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 11:52:36.310226       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.26.124"}
	I0908 11:52:36.322366       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.230.11"}
	I0908 11:52:44.800776       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.101.239"}
	I0908 11:52:57.204070       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:53:45.294121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:54:01.274684       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:55:09.484234       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:55:13.542141       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:56:00.385675       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 11:56:29.356008       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:56:32.348143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [655bb772c77f9c54e947f894f2b5408e378a44a4ec5426abec99efd7315e4aba] <==
	I0908 11:45:19.602591       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 11:45:19.602630       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 11:45:19.602644       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 11:45:19.602654       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 11:45:19.603759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:45:19.604845       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 11:45:19.604976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:45:19.606970       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:45:19.609325       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 11:45:19.611687       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 11:45:19.613164       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 11:45:19.613359       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:45:19.619692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:45:19.619725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:45:19.619736       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:45:19.621797       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 11:45:19.645444       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 11:45:19.647951       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 11:45:19.647984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 11:45:19.648143       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0908 11:45:19.653499       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:45:19.659980       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:45:19.668279       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 11:45:19.670525       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 11:45:19.673894       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-controller-manager [ee5315cac0c163bfb95fa52279fcb70d4d6983114ee68b4a4c285321a03e006b] <==
	I0908 11:46:03.711071       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 11:46:03.711532       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0908 11:46:03.711075       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 11:46:03.711589       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 11:46:03.711602       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 11:46:03.712974       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 11:46:03.715437       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 11:46:03.715558       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 11:46:03.716764       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 11:46:03.718946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 11:46:03.719061       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:46:03.722392       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 11:46:03.723692       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:46:03.723714       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 11:46:03.723720       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 11:46:03.725844       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 11:46:03.728563       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 11:46:03.730796       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0908 11:52:36.176241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.182456       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.184874       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.188552       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.188697       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.193895       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:52:36.197365       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [88e8c46a970af2a0a8330bed0cb2cd07f68d7ba0a0261387a4c6ba49c8ec196f] <==
	I0908 11:45:12.692788       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:45:13.178017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0908 11:45:16.277983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-982703\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0908 11:45:17.781517       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:45:17.781562       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:45:17.781669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:45:17.805175       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:45:17.805270       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:45:17.810169       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:45:17.810537       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:45:17.810568       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:45:17.811899       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:45:17.811925       1 config.go:309] "Starting node config controller"
	I0908 11:45:17.811936       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:45:17.811937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:45:17.811973       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:45:17.812008       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:45:17.812016       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:45:17.812017       1 config.go:200] "Starting service config controller"
	I0908 11:45:17.812050       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:45:17.912766       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:45:17.912969       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:45:17.913077       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a70b43c6347149dabe9d023569941ad662e5b3065328f59e8c6635e8711acb52] <==
	I0908 11:46:01.385955       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:46:01.521481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:46:01.622363       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:46:01.622406       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 11:46:01.622486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:46:01.648521       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 11:46:01.648595       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:46:01.653541       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:46:01.653926       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:46:01.653964       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:46:01.655229       1 config.go:309] "Starting node config controller"
	I0908 11:46:01.655255       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:46:01.655266       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 11:46:01.655321       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:46:01.655327       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:46:01.655374       1 config.go:200] "Starting service config controller"
	I0908 11:46:01.655448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:46:01.655502       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:46:01.655513       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:46:01.755629       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:46:01.755724       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:46:01.755724       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [016cad6cb498627f9e071ef394bbb7698274cffff4a692221c460fa767dc80d8] <==
	I0908 11:45:13.609784       1 serving.go:386] Generated self-signed cert in-memory
	I0908 11:45:16.678163       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:45:16.678197       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:45:16.683814       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:45:16.683944       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 11:45:16.683966       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 11:45:16.684785       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:45:16.685032       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:16.687311       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:16.685843       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:45:16.687436       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:45:16.784235       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 11:45:16.793604       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 11:45:16.793765       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:39.451239       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 11:45:39.451311       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0908 11:45:39.451475       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:45:39.451427       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 11:45:39.451918       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 11:45:39.451958       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 11:45:39.452051       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [433baee4a6e87be57add9f7878b2bcc28c6a48f210f5f0aa04a7b4cc377162fb] <==
	I0908 11:45:58.528920       1 serving.go:386] Generated self-signed cert in-memory
	W0908 11:46:00.295224       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 11:46:00.295352       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 11:46:00.295395       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 11:46:00.295432       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 11:46:00.480791       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:46:00.480826       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:46:00.483963       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 11:46:00.484062       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:46:00.484165       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:46:00.484166       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:46:00.584429       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 11:55:57 functional-982703 kubelet[5250]: E0908 11:55:57.092183    5250 manager.go:1116] Failed to create existing container: /crio-30a817e495d1513a95ce96b2571af58bebce49a34ee54a58ad277ac22ffb4116: Error finding container 30a817e495d1513a95ce96b2571af58bebce49a34ee54a58ad277ac22ffb4116: Status 404 returned error can't find the container with id 30a817e495d1513a95ce96b2571af58bebce49a34ee54a58ad277ac22ffb4116
	Sep 08 11:55:57 functional-982703 kubelet[5250]: E0908 11:55:57.092343    5250 manager.go:1116] Failed to create existing container: /crio-76b101dbdd1fa1431f380ecfb3a2528592b911632dd606451175fba356acf320: Error finding container 76b101dbdd1fa1431f380ecfb3a2528592b911632dd606451175fba356acf320: Status 404 returned error can't find the container with id 76b101dbdd1fa1431f380ecfb3a2528592b911632dd606451175fba356acf320
	Sep 08 11:55:57 functional-982703 kubelet[5250]: E0908 11:55:57.092524    5250 manager.go:1116] Failed to create existing container: /crio-b3b650a07f8d2793035f731614c04fa50e3030c4dfbc956541541ff1218125f3: Error finding container b3b650a07f8d2793035f731614c04fa50e3030c4dfbc956541541ff1218125f3: Status 404 returned error can't find the container with id b3b650a07f8d2793035f731614c04fa50e3030c4dfbc956541541ff1218125f3
	Sep 08 11:55:57 functional-982703 kubelet[5250]: E0908 11:55:57.092774    5250 manager.go:1116] Failed to create existing container: /docker/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/crio-61c67e208f95d78f368b40bfca3c20bee09b85f357ea18bb2b6da4ad97c804b1: Error finding container 61c67e208f95d78f368b40bfca3c20bee09b85f357ea18bb2b6da4ad97c804b1: Status 404 returned error can't find the container with id 61c67e208f95d78f368b40bfca3c20bee09b85f357ea18bb2b6da4ad97c804b1
	Sep 08 11:55:57 functional-982703 kubelet[5250]: E0908 11:55:57.093106    5250 manager.go:1116] Failed to create existing container: /docker/620b8d39c764727eda17f94afadd827084592cb86562df734fadfb4c869ae49d/crio-76b101dbdd1fa1431f380ecfb3a2528592b911632dd606451175fba356acf320: Error finding container 76b101dbdd1fa1431f380ecfb3a2528592b911632dd606451175fba356acf320: Status 404 returned error can't find the container with id 76b101dbdd1fa1431f380ecfb3a2528592b911632dd606451175fba356acf320
	Sep 08 11:55:57 functional-982703 kubelet[5250]: E0908 11:55:57.093285    5250 manager.go:1116] Failed to create existing container: /crio-10a6b432a4c07f1a50de7ca1733e25352cf5b43d36e86ca57f862ec4a2e46378: Error finding container 10a6b432a4c07f1a50de7ca1733e25352cf5b43d36e86ca57f862ec4a2e46378: Status 404 returned error can't find the container with id 10a6b432a4c07f1a50de7ca1733e25352cf5b43d36e86ca57f862ec4a2e46378
	Sep 08 11:55:57 functional-982703 kubelet[5250]: E0908 11:55:57.185598    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332557185354475  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 08 11:55:57 functional-982703 kubelet[5250]: E0908 11:55:57.185637    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332557185354475  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 08 11:56:03 functional-982703 kubelet[5250]: E0908 11:56:03.884457    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5cb91085-c70e-4060-b183-9a58c2b44c2c"
	Sep 08 11:56:07 functional-982703 kubelet[5250]: E0908 11:56:07.186992    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332567186755977  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 08 11:56:07 functional-982703 kubelet[5250]: E0908 11:56:07.187025    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332567186755977  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 08 11:56:08 functional-982703 kubelet[5250]: E0908 11:56:08.865280    5250 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 08 11:56:08 functional-982703 kubelet[5250]: E0908 11:56:08.865345    5250 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 08 11:56:08 functional-982703 kubelet[5250]: E0908 11:56:08.865567    5250 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(099fa2a8-93cb-41e5-bc04-b7283dd0c405): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 11:56:08 functional-982703 kubelet[5250]: E0908 11:56:08.865628    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="099fa2a8-93cb-41e5-bc04-b7283dd0c405"
	Sep 08 11:56:09 functional-982703 kubelet[5250]: E0908 11:56:09.883939    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-jptcl" podUID="05dc921d-b1c5-4289-8b99-cf2bf22ea0a7"
	Sep 08 11:56:15 functional-982703 kubelet[5250]: E0908 11:56:15.883277    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5cb91085-c70e-4060-b183-9a58c2b44c2c"
	Sep 08 11:56:17 functional-982703 kubelet[5250]: E0908 11:56:17.188680    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332577188402787  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 08 11:56:17 functional-982703 kubelet[5250]: E0908 11:56:17.188723    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332577188402787  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 08 11:56:20 functional-982703 kubelet[5250]: E0908 11:56:20.883207    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="099fa2a8-93cb-41e5-bc04-b7283dd0c405"
	Sep 08 11:56:21 functional-982703 kubelet[5250]: E0908 11:56:21.883834    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-jptcl" podUID="05dc921d-b1c5-4289-8b99-cf2bf22ea0a7"
	Sep 08 11:56:27 functional-982703 kubelet[5250]: E0908 11:56:27.190350    5250 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757332587190103563  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 08 11:56:27 functional-982703 kubelet[5250]: E0908 11:56:27.190396    5250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757332587190103563  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 08 11:56:28 functional-982703 kubelet[5250]: E0908 11:56:28.883700    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5cb91085-c70e-4060-b183-9a58c2b44c2c"
	Sep 08 11:56:31 functional-982703 kubelet[5250]: E0908 11:56:31.883108    5250 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="099fa2a8-93cb-41e5-bc04-b7283dd0c405"
	
	
	==> storage-provisioner [376696c34ca30bbc2c9ba32382c554c4721ccf0e62f98b592421f2e54f245671] <==
	W0908 11:56:09.282885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:11.286332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:11.290679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:13.294806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:13.301536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:15.305199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:15.311360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:17.314852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:17.319284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:19.322958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:19.328126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:21.331858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:21.335975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:23.339068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:23.343601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:25.346410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:25.350716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:27.353710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:27.359246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:29.362690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:29.367214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:31.370571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:31.376013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:33.379581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:56:33.384619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [db496053d539cb8b18db74e98dfad7b03bba337a9d25ae0ca7d6961f5f4adf7f] <==
	I0908 11:45:24.890428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 11:45:24.898342       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 11:45:24.898395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 11:45:24.900904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:28.356529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:32.617207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:36.216423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:45:39.270880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-982703 -n functional-982703
helpers_test.go:269: (dbg) Run:  kubectl --context functional-982703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj: exit status 1 (132.742941ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:51:53 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://26a0c977230c065341b7f2a3d1081ddb31606fba545a47b7a30641c7f3d8fc73
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 11:52:38 +0000
	      Finished:     Mon, 08 Sep 2025 11:52:38 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4pg5h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-4pg5h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  4m41s  default-scheduler  Successfully assigned default/busybox-mount to functional-982703
	  Normal  Pulling    4m41s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m56s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.354s (44.4s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m56s  kubelet            Created container: mount-munger
	  Normal  Started    3m56s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hjrc4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rhf2d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rhf2d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hjrc4 to functional-982703
	  Normal   Pulling    4m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     3m56s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m36s (x16 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    90s (x21 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-w982h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:52:44 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2mm5t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2mm5t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m50s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w982h to functional-982703
	  Warning  Failed     86s (x2 over 2m56s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     86s (x2 over 2m56s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    75s (x2 over 2m55s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     75s (x2 over 2m55s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    60s (x3 over 3m49s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-jptcl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4nfdt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4nfdt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-jptcl to functional-982703
	  Warning  Failed     4m29s                kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m6s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     86s (x4 over 9m6s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     86s (x5 over 9m6s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    13s (x16 over 9m5s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     13s (x16 over 9m5s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:28 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9mcrq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9mcrq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/nginx-svc to functional-982703
	  Normal   Pulling    3m36s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m26s (x5 over 9m36s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m26s (x5 over 9m36s)  kubelet            Error: ErrImagePull
	  Warning  Failed     71s (x16 over 9m35s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    6s (x21 over 9m35s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-982703/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 11:46:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vksvd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vksvd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-982703
	  Normal   Pulling    2m34s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     26s (x5 over 8m36s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     26s (x5 over 8m36s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x12 over 8m35s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     3s (x12 over 8m35s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-v9krs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bq8vj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-982703 describe pod busybox-mount hello-node-75c85bcc94-hjrc4 hello-node-connect-7d85dfc575-w982h mysql-5bb876957f-jptcl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-v9krs kubernetes-dashboard-855c9754f9-bq8vj: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (603.29s)
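The mysql pod above never left ImagePullBackOff: every pull of docker.io/mysql:5.7 hit Docker Hub's unauthenticated rate limit (toomanyrequests). A minimal mitigation sketch, assuming credentials for docker.io are available, is to pull the image on the host and side-load it into the profile so the kubelet never contacts the registry:

	# sketch only: authenticate so the host-side pull is not rate limited (credentials assumed)
	docker login
	docker pull docker.io/mysql:5.7
	# copy the cached image into the profile; the kubelet then finds it locally
	out/minikube-linux-amd64 -p functional-982703 image load docker.io/mysql:5.7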

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-982703 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-982703 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hjrc4" [af9a3701-91c8-45e0-a4ca-5f35d3e9c05a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-982703 -n functional-982703
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-08 11:56:24.900267465 +0000 UTC m=+1404.193009954
functional_test.go:1460: (dbg) Run:  kubectl --context functional-982703 describe po hello-node-75c85bcc94-hjrc4 -n default
functional_test.go:1460: (dbg) kubectl --context functional-982703 describe po hello-node-75c85bcc94-hjrc4 -n default:
Name:             hello-node-75c85bcc94-hjrc4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-982703/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:46:24 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rhf2d (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rhf2d:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hjrc4 to functional-982703
Normal   Pulling    3m58s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     3m46s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     3m46s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     2m26s (x16 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    80s (x21 over 9m59s)    kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-982703 logs hello-node-75c85bcc94-hjrc4 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-982703 logs hello-node-75c85bcc94-hjrc4 -n default: exit status 1 (78.973271ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hjrc4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-982703 logs hello-node-75c85bcc94-hjrc4 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)
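Unlike the rate-limited pulls above, the hello-node pods fail because CRI-O refuses to resolve the short image name "kicbase/echo-server": /etc/containers/registries.conf on the node defines no unqualified-search registries. One hedged workaround sketch is to point the deployment at a fully qualified reference (the docker.io prefix and the 1.0 tag below are assumptions, not values taken from this run):

	# hypothetical: fully qualify the image so no short-name alias is needed
	kubectl --context functional-982703 set image deployment/hello-node \
	    echo-server=docker.io/kicbase/echo-server:1.0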

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-982703 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [5cb91085-c70e-4060-b183-9a58c2b44c2c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-982703 -n functional-982703
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-08 11:50:28.524361083 +0000 UTC m=+1047.817103551
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-982703 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-982703 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-982703/192.168.49.2
Start Time:       Mon, 08 Sep 2025 11:46:28 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:  10.244.0.5
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9mcrq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9mcrq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-982703
Normal   Pulling    92s (x3 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     29s (x3 over 3m30s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     29s (x3 over 3m30s)  kubelet            Error: ErrImagePull
Normal   BackOff    4s (x4 over 3m29s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     4s (x4 over 3m29s)   kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-982703 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-982703 logs nginx-svc -n default: exit status 1 (74.277038ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-982703 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.67s)
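This failure shares the root cause of the MySQL test: docker.io/nginx:alpine cannot be pulled inside the 4m0s window because of the Docker Hub rate limit. When re-running interactively, the same wait the test performs can be reproduced by hand (a sketch using the test's context and selector):

	# block until the pod backing the tunnel service is Ready, or time out like the test
	kubectl --context functional-982703 wait pod -l run=nginx-svc \
	    --for=condition=Ready --timeout=4m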

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (80.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0908 11:50:28.666888  618620 retry.go:31] will retry after 3.974120439s: Temporary Error: Get "http:": http: no Host in request URL
I0908 11:50:32.641622  618620 retry.go:31] will retry after 4.721932243s: Temporary Error: Get "http:": http: no Host in request URL
I0908 11:50:37.363772  618620 retry.go:31] will retry after 6.944082828s: Temporary Error: Get "http:": http: no Host in request URL
I0908 11:50:44.308878  618620 retry.go:31] will retry after 6.903651211s: Temporary Error: Get "http:": http: no Host in request URL
I0908 11:50:51.212711  618620 retry.go:31] will retry after 13.290667662s: Temporary Error: Get "http:": http: no Host in request URL
E0908 11:50:56.269829  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0908 11:51:04.504191  618620 retry.go:31] will retry after 25.126457836s: Temporary Error: Get "http:": http: no Host in request URL
E0908 11:51:23.973734  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0908 11:51:29.631843  618620 retry.go:31] will retry after 19.620775376s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-982703 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.103.58.250   10.103.58.250   80:30128/TCP   5m21s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (80.65s)
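AccessDirect fails downstream of WaitService/Setup: the test never captured a host for the tunnel, so every retry issues Get "http:" with an empty URL even though the LoadBalancer service did receive an external IP (10.103.58.250). A manual check of what the tunnel exposed, sketched outside the test, would be:

	# read the external IP the tunnel assigned, then probe it directly
	IP=$(kubectl --context functional-982703 get svc nginx-svc \
	    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://${IP}/"   # would still fail in this run: the nginx pod never became Ready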

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 service --namespace=default --https --url hello-node: exit status 115 (533.927403ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30211
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-982703 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
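This subtest, and the Format and URL subtests that follow, all exit with SVC_UNREACHABLE because the hello-node service has no ready backend; its only pod is still in ImagePullBackOff. A quick diagnostic sketch (not part of the test) to confirm the service is fine and only its endpoints are empty:

	# an empty ENDPOINTS column here points at the pod, not the service definition
	kubectl --context functional-982703 get endpoints hello-node
	kubectl --context functional-982703 get pods -l app=hello-node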

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 service hello-node --url --format={{.IP}}: exit status 115 (536.536376ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-982703 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 service hello-node --url: exit status 115 (537.074373ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30211
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-982703 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30211
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                    
TestScheduledStopUnix (26.87s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-418489 --memory=3072 --driver=docker  --container-runtime=crio
E0908 12:25:56.268993  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-418489 --memory=3072 --driver=docker  --container-runtime=crio: (22.793027577s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-418489 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-418489 -n scheduled-stop-418489
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-418489 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 804088 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-09-08 12:26:15.61326203 +0000 UTC m=+3194.906004514
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-418489
helpers_test.go:243: (dbg) docker inspect scheduled-stop-418489:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "77347e863d9e2d4950a0819a5e78b5ae3f7defc1b2c945c7e2c6e938ac5716ec",
	        "Created": "2025-09-08T12:25:57.838367272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 801709,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:25:57.871414486Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/77347e863d9e2d4950a0819a5e78b5ae3f7defc1b2c945c7e2c6e938ac5716ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/77347e863d9e2d4950a0819a5e78b5ae3f7defc1b2c945c7e2c6e938ac5716ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/77347e863d9e2d4950a0819a5e78b5ae3f7defc1b2c945c7e2c6e938ac5716ec/hosts",
	        "LogPath": "/var/lib/docker/containers/77347e863d9e2d4950a0819a5e78b5ae3f7defc1b2c945c7e2c6e938ac5716ec/77347e863d9e2d4950a0819a5e78b5ae3f7defc1b2c945c7e2c6e938ac5716ec-json.log",
	        "Name": "/scheduled-stop-418489",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "scheduled-stop-418489:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "scheduled-stop-418489",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "77347e863d9e2d4950a0819a5e78b5ae3f7defc1b2c945c7e2c6e938ac5716ec",
	                "LowerDir": "/var/lib/docker/overlay2/2dbb5c89a8e1b12db7e49ee2d35b4a0d3f0ba300a521c7c3d6d1aa6bb100cfa6-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2dbb5c89a8e1b12db7e49ee2d35b4a0d3f0ba300a521c7c3d6d1aa6bb100cfa6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2dbb5c89a8e1b12db7e49ee2d35b4a0d3f0ba300a521c7c3d6d1aa6bb100cfa6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2dbb5c89a8e1b12db7e49ee2d35b4a0d3f0ba300a521c7c3d6d1aa6bb100cfa6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-418489",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-418489/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-418489",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-418489",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-418489",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d24e54bd075ab1977d33d296ecdd424c429e284430a19548d2a4bf89e685b79",
	            "SandboxKey": "/var/run/docker/netns/4d24e54bd075",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33333"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33334"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33335"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-418489": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:a3:14:e6:b0:db",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da54d1dc706148f12402298139e4fff2587d869acedf4b5df1c5e9b598f56a57",
	                    "EndpointID": "3dcb2a510b9c2da1be0077db0f4e8530c974f2ab9a163ce90a7c676033b27ec7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-418489",
	                        "77347e863d9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-418489 -n scheduled-stop-418489
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p scheduled-stop-418489 logs -n 25
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-296251                                                                                                                                       │ multinode-296251      │ jenkins │ v1.36.0 │ 08 Sep 25 12:20 UTC │ 08 Sep 25 12:21 UTC │
	│ start   │ -p multinode-296251 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-296251      │ jenkins │ v1.36.0 │ 08 Sep 25 12:21 UTC │ 08 Sep 25 12:22 UTC │
	│ node    │ list -p multinode-296251                                                                                                                                  │ multinode-296251      │ jenkins │ v1.36.0 │ 08 Sep 25 12:22 UTC │                     │
	│ node    │ multinode-296251 node delete m03                                                                                                                          │ multinode-296251      │ jenkins │ v1.36.0 │ 08 Sep 25 12:22 UTC │ 08 Sep 25 12:22 UTC │
	│ stop    │ multinode-296251 stop                                                                                                                                     │ multinode-296251      │ jenkins │ v1.36.0 │ 08 Sep 25 12:22 UTC │ 08 Sep 25 12:22 UTC │
	│ start   │ -p multinode-296251 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-296251      │ jenkins │ v1.36.0 │ 08 Sep 25 12:22 UTC │ 08 Sep 25 12:23 UTC │
	│ node    │ list -p multinode-296251                                                                                                                                  │ multinode-296251      │ jenkins │ v1.36.0 │ 08 Sep 25 12:23 UTC │                     │
	│ start   │ -p multinode-296251-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-296251-m02  │ jenkins │ v1.36.0 │ 08 Sep 25 12:23 UTC │                     │
	│ start   │ -p multinode-296251-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-296251-m03  │ jenkins │ v1.36.0 │ 08 Sep 25 12:23 UTC │ 08 Sep 25 12:23 UTC │
	│ node    │ add -p multinode-296251                                                                                                                                   │ multinode-296251      │ jenkins │ v1.36.0 │ 08 Sep 25 12:23 UTC │                     │
	│ delete  │ -p multinode-296251-m03                                                                                                                                   │ multinode-296251-m03  │ jenkins │ v1.36.0 │ 08 Sep 25 12:23 UTC │ 08 Sep 25 12:23 UTC │
	│ delete  │ -p multinode-296251                                                                                                                                       │ multinode-296251      │ jenkins │ v1.36.0 │ 08 Sep 25 12:23 UTC │ 08 Sep 25 12:23 UTC │
	│ start   │ -p test-preload-480072 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-480072   │ jenkins │ v1.36.0 │ 08 Sep 25 12:23 UTC │ 08 Sep 25 12:24 UTC │
	│ image   │ test-preload-480072 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-480072   │ jenkins │ v1.36.0 │ 08 Sep 25 12:24 UTC │ 08 Sep 25 12:24 UTC │
	│ stop    │ -p test-preload-480072                                                                                                                                    │ test-preload-480072   │ jenkins │ v1.36.0 │ 08 Sep 25 12:24 UTC │ 08 Sep 25 12:24 UTC │
	│ start   │ -p test-preload-480072 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-480072   │ jenkins │ v1.36.0 │ 08 Sep 25 12:24 UTC │ 08 Sep 25 12:25 UTC │
	│ image   │ test-preload-480072 image list                                                                                                                            │ test-preload-480072   │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ delete  │ -p test-preload-480072                                                                                                                                    │ test-preload-480072   │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:25 UTC │
	│ start   │ -p scheduled-stop-418489 --memory=3072 --driver=docker  --container-runtime=crio                                                                          │ scheduled-stop-418489 │ jenkins │ v1.36.0 │ 08 Sep 25 12:25 UTC │ 08 Sep 25 12:26 UTC │
	│ stop    │ -p scheduled-stop-418489 --schedule 5m                                                                                                                    │ scheduled-stop-418489 │ jenkins │ v1.36.0 │ 08 Sep 25 12:26 UTC │                     │
	│ stop    │ -p scheduled-stop-418489 --schedule 5m                                                                                                                    │ scheduled-stop-418489 │ jenkins │ v1.36.0 │ 08 Sep 25 12:26 UTC │                     │
	│ stop    │ -p scheduled-stop-418489 --schedule 5m                                                                                                                    │ scheduled-stop-418489 │ jenkins │ v1.36.0 │ 08 Sep 25 12:26 UTC │                     │
	│ stop    │ -p scheduled-stop-418489 --schedule 15s                                                                                                                   │ scheduled-stop-418489 │ jenkins │ v1.36.0 │ 08 Sep 25 12:26 UTC │                     │
	│ stop    │ -p scheduled-stop-418489 --schedule 15s                                                                                                                   │ scheduled-stop-418489 │ jenkins │ v1.36.0 │ 08 Sep 25 12:26 UTC │                     │
	│ stop    │ -p scheduled-stop-418489 --schedule 15s                                                                                                                   │ scheduled-stop-418489 │ jenkins │ v1.36.0 │ 08 Sep 25 12:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
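	The last rows above are the repeated scheduled-stop requests issued against the scheduled-stop-418489 profile; each later --schedule call is expected to replace the previously queued stop. A minimal sketch of the equivalent invocations, assuming the same out/minikube-linux-amd64 binary used elsewhere in this report:
	
	  # queue a stop five minutes out, then re-queue it with a 15-second schedule
	  out/minikube-linux-amd64 stop -p scheduled-stop-418489 --schedule 5m
	  out/minikube-linux-amd64 stop -p scheduled-stop-418489 --schedule 15s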
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:25:52
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:25:52.437523  801176 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:25:52.437815  801176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:25:52.437820  801176 out.go:374] Setting ErrFile to fd 2...
	I0908 12:25:52.437822  801176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:25:52.438008  801176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:25:52.438632  801176 out.go:368] Setting JSON to false
	I0908 12:25:52.439632  801176 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11296,"bootTime":1757323056,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 12:25:52.439747  801176 start.go:140] virtualization: kvm guest
	I0908 12:25:52.441934  801176 out.go:179] * [scheduled-stop-418489] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 12:25:52.443368  801176 notify.go:220] Checking for updates...
	I0908 12:25:52.443418  801176 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:25:52.445146  801176 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:25:52.446632  801176 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:25:52.448123  801176 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 12:25:52.449385  801176 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 12:25:52.450531  801176 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:25:52.451777  801176 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:25:52.476209  801176 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:25:52.476324  801176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:25:52.526411  801176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-09-08 12:25:52.51724844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:25:52.526514  801176 docker.go:318] overlay module found
	I0908 12:25:52.528504  801176 out.go:179] * Using the docker driver based on user configuration
	I0908 12:25:52.529815  801176 start.go:304] selected driver: docker
	I0908 12:25:52.529828  801176 start.go:918] validating driver "docker" against <nil>
	I0908 12:25:52.529841  801176 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:25:52.530769  801176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:25:52.580712  801176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-09-08 12:25:52.571260553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:25:52.580865  801176 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:25:52.581055  801176 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 12:25:52.582845  801176 out.go:179] * Using Docker driver with root privileges
	I0908 12:25:52.584026  801176 cni.go:84] Creating CNI manager for ""
	I0908 12:25:52.584095  801176 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:25:52.584103  801176 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 12:25:52.584186  801176 start.go:348] cluster config:
	{Name:scheduled-stop-418489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-418489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:25:52.585736  801176 out.go:179] * Starting "scheduled-stop-418489" primary control-plane node in "scheduled-stop-418489" cluster
	I0908 12:25:52.586949  801176 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:25:52.588275  801176 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:25:52.589504  801176 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:25:52.589543  801176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 12:25:52.589552  801176 cache.go:58] Caching tarball of preloaded images
	I0908 12:25:52.589647  801176 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:25:52.589665  801176 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 12:25:52.589674  801176 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:25:52.590040  801176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/config.json ...
	I0908 12:25:52.590059  801176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/config.json: {Name:mk9bc5cceda2f586de65f5b8b53f36c544ac9e2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:25:52.611166  801176 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:25:52.611189  801176 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:25:52.611210  801176 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:25:52.611238  801176 start.go:360] acquireMachinesLock for scheduled-stop-418489: {Name:mke583662a7076edb1cfc3a0a155aab9750b4625 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:25:52.611366  801176 start.go:364] duration metric: took 112.549µs to acquireMachinesLock for "scheduled-stop-418489"
	I0908 12:25:52.611394  801176 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-418489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-418489 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:25:52.611501  801176 start.go:125] createHost starting for "" (driver="docker")
	I0908 12:25:52.613637  801176 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0908 12:25:52.613887  801176 start.go:159] libmachine.API.Create for "scheduled-stop-418489" (driver="docker")
	I0908 12:25:52.613919  801176 client.go:168] LocalClient.Create starting
	I0908 12:25:52.613989  801176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem
	I0908 12:25:52.614019  801176 main.go:141] libmachine: Decoding PEM data...
	I0908 12:25:52.614032  801176 main.go:141] libmachine: Parsing certificate...
	I0908 12:25:52.614112  801176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem
	I0908 12:25:52.614130  801176 main.go:141] libmachine: Decoding PEM data...
	I0908 12:25:52.614136  801176 main.go:141] libmachine: Parsing certificate...
	I0908 12:25:52.614457  801176 cli_runner.go:164] Run: docker network inspect scheduled-stop-418489 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 12:25:52.631553  801176 cli_runner.go:211] docker network inspect scheduled-stop-418489 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 12:25:52.631625  801176 network_create.go:284] running [docker network inspect scheduled-stop-418489] to gather additional debugging logs...
	I0908 12:25:52.631638  801176 cli_runner.go:164] Run: docker network inspect scheduled-stop-418489
	W0908 12:25:52.649116  801176 cli_runner.go:211] docker network inspect scheduled-stop-418489 returned with exit code 1
	I0908 12:25:52.649138  801176 network_create.go:287] error running [docker network inspect scheduled-stop-418489]: docker network inspect scheduled-stop-418489: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-418489 not found
	I0908 12:25:52.649150  801176 network_create.go:289] output of [docker network inspect scheduled-stop-418489]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-418489 not found
	
	** /stderr **
	I0908 12:25:52.649250  801176 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:25:52.668170  801176 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a42c506aba4a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:58:c9:24:f1:2c} reservation:<nil>}
	I0908 12:25:52.668701  801176 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-05a0e362ea2b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:9a:f7:2e:f5:bc} reservation:<nil>}
	I0908 12:25:52.669227  801176 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3274e51da707 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:f1:f1:05:f9:70} reservation:<nil>}
	I0908 12:25:52.669879  801176 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d3d070}
	I0908 12:25:52.669901  801176 network_create.go:124] attempt to create docker network scheduled-stop-418489 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0908 12:25:52.669959  801176 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-418489 scheduled-stop-418489
	I0908 12:25:52.730789  801176 network_create.go:108] docker network scheduled-stop-418489 192.168.76.0/24 created
	I0908 12:25:52.730817  801176 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-418489" container
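	A minimal shell sketch of the network step above, using the same docker flags recorded by cli_runner (the network and its labels are named after the profile):
	
	  # create the per-profile bridge network on the free subnet picked above
	  docker network create --driver=bridge \
	    --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=scheduled-stop-418489 \
	    scheduled-stop-418489
	  # confirm the subnet/gateway the node container will be attached to
	  docker network inspect scheduled-stop-418489 --format '{{json .IPAM.Config}}'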
	I0908 12:25:52.730905  801176 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 12:25:52.748640  801176 cli_runner.go:164] Run: docker volume create scheduled-stop-418489 --label name.minikube.sigs.k8s.io=scheduled-stop-418489 --label created_by.minikube.sigs.k8s.io=true
	I0908 12:25:52.768168  801176 oci.go:103] Successfully created a docker volume scheduled-stop-418489
	I0908 12:25:52.768241  801176 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-418489-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-418489 --entrypoint /usr/bin/test -v scheduled-stop-418489:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 12:25:53.208794  801176 oci.go:107] Successfully prepared a docker volume scheduled-stop-418489
	I0908 12:25:53.208846  801176 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:25:53.208869  801176 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 12:25:53.208956  801176 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-418489:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 12:25:57.769498  801176 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-418489:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.560468894s)
	I0908 12:25:57.769523  801176 kic.go:203] duration metric: took 4.560650012s to extract preloaded images to volume ...
	W0908 12:25:57.769690  801176 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 12:25:57.769804  801176 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 12:25:57.821913  801176 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-418489 --name scheduled-stop-418489 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-418489 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-418489 --network scheduled-stop-418489 --ip 192.168.76.2 --volume scheduled-stop-418489:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 12:25:58.082919  801176 cli_runner.go:164] Run: docker container inspect scheduled-stop-418489 --format={{.State.Running}}
	I0908 12:25:58.101229  801176 cli_runner.go:164] Run: docker container inspect scheduled-stop-418489 --format={{.State.Status}}
	I0908 12:25:58.121151  801176 cli_runner.go:164] Run: docker exec scheduled-stop-418489 stat /var/lib/dpkg/alternatives/iptables
	I0908 12:25:58.167686  801176 oci.go:144] the created container "scheduled-stop-418489" has a running status.
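	The node container is started with ephemeral 127.0.0.1 port publications for ssh, the API server and the docker/registry ports (see the --publish flags above). A minimal sketch for recovering the host-side ssh port that the later provisioning steps dial (port 33333 in this run):
	
	  # host port mapped to sshd (22/tcp) inside the node container
	  docker port scheduled-stop-418489 22/tcp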
	I0908 12:25:58.167718  801176 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/scheduled-stop-418489/id_rsa...
	I0908 12:25:58.451774  801176 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21512-614854/.minikube/machines/scheduled-stop-418489/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 12:25:58.479959  801176 cli_runner.go:164] Run: docker container inspect scheduled-stop-418489 --format={{.State.Status}}
	I0908 12:25:58.506286  801176 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 12:25:58.506300  801176 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-418489 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 12:25:58.586867  801176 cli_runner.go:164] Run: docker container inspect scheduled-stop-418489 --format={{.State.Status}}
	I0908 12:25:58.610638  801176 machine.go:93] provisionDockerMachine start ...
	I0908 12:25:58.610744  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:25:58.639080  801176 main.go:141] libmachine: Using SSH client type: native
	I0908 12:25:58.639500  801176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I0908 12:25:58.639513  801176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:25:58.791918  801176 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-418489
	
	I0908 12:25:58.791998  801176 ubuntu.go:182] provisioning hostname "scheduled-stop-418489"
	I0908 12:25:58.792102  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:25:58.811806  801176 main.go:141] libmachine: Using SSH client type: native
	I0908 12:25:58.812103  801176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I0908 12:25:58.812116  801176 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-418489 && echo "scheduled-stop-418489" | sudo tee /etc/hostname
	I0908 12:25:58.949441  801176 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-418489
	
	I0908 12:25:58.949518  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:25:58.969314  801176 main.go:141] libmachine: Using SSH client type: native
	I0908 12:25:58.969598  801176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I0908 12:25:58.969619  801176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-418489' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-418489/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-418489' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:25:59.092731  801176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:25:59.092756  801176 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 12:25:59.092780  801176 ubuntu.go:190] setting up certificates
	I0908 12:25:59.092794  801176 provision.go:84] configureAuth start
	I0908 12:25:59.092867  801176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-418489
	I0908 12:25:59.110923  801176 provision.go:143] copyHostCerts
	I0908 12:25:59.110985  801176 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem, removing ...
	I0908 12:25:59.110993  801176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem
	I0908 12:25:59.111073  801176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 12:25:59.111169  801176 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem, removing ...
	I0908 12:25:59.111172  801176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem
	I0908 12:25:59.111193  801176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 12:25:59.111255  801176 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem, removing ...
	I0908 12:25:59.111259  801176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem
	I0908 12:25:59.111277  801176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 12:25:59.111333  801176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-418489 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-418489]
	I0908 12:25:59.228967  801176 provision.go:177] copyRemoteCerts
	I0908 12:25:59.229036  801176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:25:59.229089  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:25:59.248489  801176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/scheduled-stop-418489/id_rsa Username:docker}
	I0908 12:25:59.341746  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:25:59.368156  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0908 12:25:59.395991  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:25:59.422733  801176 provision.go:87] duration metric: took 329.925294ms to configureAuth
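	The server certificate generated during configureAuth carries the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, scheduled-stop-418489). A minimal sketch for checking it from the host, assuming the machines path shown in the log:
	
	  # print the SANs of the generated server certificate
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'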
	I0908 12:25:59.422755  801176 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:25:59.422974  801176 config.go:182] Loaded profile config "scheduled-stop-418489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:25:59.423126  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:25:59.442366  801176 main.go:141] libmachine: Using SSH client type: native
	I0908 12:25:59.442597  801176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I0908 12:25:59.442607  801176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:25:59.664798  801176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:25:59.664815  801176 machine.go:96] duration metric: took 1.054161974s to provisionDockerMachine
	I0908 12:25:59.664824  801176 client.go:171] duration metric: took 7.05090072s to LocalClient.Create
	I0908 12:25:59.664843  801176 start.go:167] duration metric: took 7.050958889s to libmachine.API.Create "scheduled-stop-418489"
	I0908 12:25:59.664848  801176 start.go:293] postStartSetup for "scheduled-stop-418489" (driver="docker")
	I0908 12:25:59.664857  801176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:25:59.664907  801176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:25:59.664953  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:25:59.683578  801176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/scheduled-stop-418489/id_rsa Username:docker}
	I0908 12:25:59.773426  801176 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:25:59.777156  801176 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:25:59.777176  801176 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:25:59.777181  801176 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:25:59.777188  801176 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:25:59.777198  801176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 12:25:59.777256  801176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 12:25:59.777325  801176 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem -> 6186202.pem in /etc/ssl/certs
	I0908 12:25:59.777412  801176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:25:59.786822  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:25:59.811706  801176 start.go:296] duration metric: took 146.842663ms for postStartSetup
	I0908 12:25:59.812082  801176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-418489
	I0908 12:25:59.830930  801176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/config.json ...
	I0908 12:25:59.831236  801176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:25:59.831274  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:25:59.849470  801176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/scheduled-stop-418489/id_rsa Username:docker}
	I0908 12:25:59.941152  801176 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:25:59.945836  801176 start.go:128] duration metric: took 7.334318466s to createHost
	I0908 12:25:59.945855  801176 start.go:83] releasing machines lock for "scheduled-stop-418489", held for 7.334480932s
	I0908 12:25:59.945926  801176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-418489
	I0908 12:25:59.963816  801176 ssh_runner.go:195] Run: cat /version.json
	I0908 12:25:59.963859  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:25:59.963889  801176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:25:59.963958  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:25:59.982550  801176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/scheduled-stop-418489/id_rsa Username:docker}
	I0908 12:25:59.982779  801176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/scheduled-stop-418489/id_rsa Username:docker}
	I0908 12:26:00.067863  801176 ssh_runner.go:195] Run: systemctl --version
	I0908 12:26:00.142910  801176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:26:00.284714  801176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:26:00.289842  801176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:26:00.309978  801176 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:26:00.310062  801176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:26:00.339607  801176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 12:26:00.339622  801176 start.go:495] detecting cgroup driver to use...
	I0908 12:26:00.339678  801176 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:26:00.339722  801176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:26:00.356862  801176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:26:00.368827  801176 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:26:00.368880  801176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:26:00.382985  801176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:26:00.397886  801176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:26:00.476600  801176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:26:00.556079  801176 docker.go:234] disabling docker service ...
	I0908 12:26:00.556141  801176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:26:00.575165  801176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:26:00.586697  801176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:26:00.664734  801176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:26:00.752008  801176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:26:00.763707  801176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:26:00.780514  801176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:26:00.780573  801176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:26:00.790754  801176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:26:00.790810  801176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:26:00.801377  801176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:26:00.811435  801176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:26:00.821861  801176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:26:00.831604  801176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:26:00.841597  801176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:26:00.857870  801176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:26:00.867910  801176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:26:00.876676  801176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:26:00.885912  801176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:26:00.963732  801176 ssh_runner.go:195] Run: sudo systemctl restart crio
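	Condensed into one shell sketch, the cri-o configuration performed above (same sed expressions as the ssh_runner lines, run directly on the node):
	
	  # point cri-o at the expected pause image and cgroup driver, then restart it
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio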
	I0908 12:26:01.084802  801176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:26:01.084857  801176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:26:01.088809  801176 start.go:563] Will wait 60s for crictl version
	I0908 12:26:01.088864  801176 ssh_runner.go:195] Run: which crictl
	I0908 12:26:01.092753  801176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:26:01.129847  801176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 12:26:01.129938  801176 ssh_runner.go:195] Run: crio --version
	I0908 12:26:01.167308  801176 ssh_runner.go:195] Run: crio --version
	I0908 12:26:01.208497  801176 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 12:26:01.209982  801176 cli_runner.go:164] Run: docker network inspect scheduled-stop-418489 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:26:01.227385  801176 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0908 12:26:01.231339  801176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:26:01.242687  801176 kubeadm.go:875] updating cluster {Name:scheduled-stop-418489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-418489 Namespace:default APIServerHAVIP: APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:26:01.242804  801176 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:26:01.242861  801176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:26:01.313510  801176 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:26:01.313524  801176 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:26:01.313578  801176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:26:01.350320  801176 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:26:01.350359  801176 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:26:01.350368  801176 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0908 12:26:01.350535  801176 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=scheduled-stop-418489 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-418489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:26:01.350635  801176 ssh_runner.go:195] Run: crio config
	I0908 12:26:01.396250  801176 cni.go:84] Creating CNI manager for ""
	I0908 12:26:01.396265  801176 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:26:01.396277  801176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:26:01.396299  801176 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-418489 NodeName:scheduled-stop-418489 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:26:01.396430  801176 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-418489"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:26:01.396497  801176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:26:01.405882  801176 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:26:01.405935  801176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:26:01.414715  801176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0908 12:26:01.433237  801176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:26:01.452087  801176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
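	The rendered kubeadm config above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal, hand-run sanity check of such a file, assuming a v1.34.0 kubeadm binary on the node's PATH (minikube drives the actual init phases itself):
	
	  # dry-run the generated config without modifying the node
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run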
	I0908 12:26:01.470406  801176 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:26:01.474184  801176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:26:01.486371  801176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:26:01.563572  801176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:26:01.578770  801176 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489 for IP: 192.168.76.2
	I0908 12:26:01.578791  801176 certs.go:194] generating shared ca certs ...
	I0908 12:26:01.578815  801176 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:26:01.579018  801176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 12:26:01.579078  801176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 12:26:01.579087  801176 certs.go:256] generating profile certs ...
	I0908 12:26:01.579166  801176 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/client.key
	I0908 12:26:01.579183  801176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/client.crt with IP's: []
	I0908 12:26:01.927789  801176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/client.crt ...
	I0908 12:26:01.927808  801176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/client.crt: {Name:mk9e9903a89cc1c3bde8069b881cdd69764c4731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:26:01.927995  801176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/client.key ...
	I0908 12:26:01.928019  801176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/client.key: {Name:mk31d0df77ad862ae0299539fc1efcaf399cfffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:26:01.928099  801176 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.key.4d63521c
	I0908 12:26:01.928110  801176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.crt.4d63521c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0908 12:26:02.265817  801176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.crt.4d63521c ...
	I0908 12:26:02.265837  801176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.crt.4d63521c: {Name:mkcfd052d2be79a2314b1336ebe34621c3b97823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:26:02.266029  801176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.key.4d63521c ...
	I0908 12:26:02.266038  801176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.key.4d63521c: {Name:mkbc7cac3e188211a514bab5edffbef1af3303d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:26:02.266117  801176 certs.go:381] copying /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.crt.4d63521c -> /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.crt
	I0908 12:26:02.266189  801176 certs.go:385] copying /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.key.4d63521c -> /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.key
	I0908 12:26:02.266237  801176 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/proxy-client.key
	I0908 12:26:02.266247  801176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/proxy-client.crt with IP's: []
	I0908 12:26:02.478925  801176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/proxy-client.crt ...
	I0908 12:26:02.478943  801176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/proxy-client.crt: {Name:mk9f1720c7d3a8d0ec6bdbc7ee36cb234855c4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:26:02.479142  801176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/proxy-client.key ...
	I0908 12:26:02.479152  801176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/proxy-client.key: {Name:mk034b28e0cb372b7c90e7629aef7befbb9c599e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:26:02.479331  801176 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem (1338 bytes)
	W0908 12:26:02.479366  801176 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620_empty.pem, impossibly tiny 0 bytes
	I0908 12:26:02.479372  801176 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:26:02.479391  801176 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:26:02.479412  801176 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:26:02.479432  801176 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 12:26:02.479464  801176 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:26:02.480167  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:26:02.505796  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:26:02.531215  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:26:02.555878  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 12:26:02.581608  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0908 12:26:02.606647  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:26:02.632214  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:26:02.658076  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/scheduled-stop-418489/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 12:26:02.684558  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /usr/share/ca-certificates/6186202.pem (1708 bytes)
	I0908 12:26:02.710343  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:26:02.737050  801176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem --> /usr/share/ca-certificates/618620.pem (1338 bytes)
	I0908 12:26:02.762409  801176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
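The certs.go:363 / crypto.go:68 lines above cover the profile certificates: a client cert for "minikube-user", an apiserver serving cert signed for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2], and a proxy-client ("aggregator") cert, all then copied to /var/lib/minikube/certs. Below is a minimal sketch of that kind of CA-signed issuance using the standard crypto/x509 package; it is not minikube's crypto.go, and the key size, validity period, and common names are illustrative assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Hypothetical self-signed CA standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Apiserver-style serving cert with the IP SANs seen in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	leaf, err := x509.ParseCertificate(leafDER)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("signed serving cert for IPs:", leaf.IPAddresses)
}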
	I0908 12:26:02.780949  801176 ssh_runner.go:195] Run: openssl version
	I0908 12:26:02.786604  801176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6186202.pem && ln -fs /usr/share/ca-certificates/6186202.pem /etc/ssl/certs/6186202.pem"
	I0908 12:26:02.796634  801176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6186202.pem
	I0908 12:26:02.800554  801176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:43 /usr/share/ca-certificates/6186202.pem
	I0908 12:26:02.800615  801176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6186202.pem
	I0908 12:26:02.807812  801176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6186202.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:26:02.817745  801176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:26:02.827433  801176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:26:02.830968  801176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:26:02.831016  801176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:26:02.837875  801176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:26:02.847558  801176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/618620.pem && ln -fs /usr/share/ca-certificates/618620.pem /etc/ssl/certs/618620.pem"
	I0908 12:26:02.856986  801176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/618620.pem
	I0908 12:26:02.860618  801176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:43 /usr/share/ca-certificates/618620.pem
	I0908 12:26:02.860662  801176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/618620.pem
	I0908 12:26:02.867236  801176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/618620.pem /etc/ssl/certs/51391683.0"
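The openssl/ln sequence above installs each PEM into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how the system trust store locates it. A hedged Go equivalent of that single step is sketched below; it shells out to the same openssl binary, needs root to write /etc/ssl/certs, and uses the minikubeCA.pem path from the log. It is a sketch of the idea, not minikube's implementation (which runs the commands over SSH).

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// Equivalent of: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"

	// Roughly: test -L <link> || ln -fs <pem> <link>
	// (only creates the symlink when nothing is there yet).
	if _, err := os.Lstat(link); err != nil {
		if err := os.Symlink(pem, link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink:", err)
			return
		}
	}
	fmt.Println("CA registered as", link)
}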
	I0908 12:26:02.876505  801176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:26:02.879814  801176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 12:26:02.879863  801176 kubeadm.go:392] StartCluster: {Name:scheduled-stop-418489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:scheduled-stop-418489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:26:02.879937  801176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:26:02.879987  801176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:26:02.915660  801176 cri.go:89] found id: ""
	I0908 12:26:02.915730  801176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:26:02.925119  801176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 12:26:02.934626  801176 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 12:26:02.934672  801176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 12:26:02.944161  801176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 12:26:02.944173  801176 kubeadm.go:157] found existing configuration files:
	
	I0908 12:26:02.944225  801176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 12:26:02.953278  801176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 12:26:02.953349  801176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 12:26:02.962189  801176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 12:26:02.970942  801176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 12:26:02.971037  801176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 12:26:02.979751  801176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 12:26:02.989277  801176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 12:26:02.989340  801176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 12:26:02.998741  801176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 12:26:03.007987  801176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 12:26:03.008045  801176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 12:26:03.017051  801176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 12:26:03.059146  801176 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 12:26:03.059204  801176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 12:26:03.077387  801176 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 12:26:03.077466  801176 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0908 12:26:03.077505  801176 kubeadm.go:310] OS: Linux
	I0908 12:26:03.077570  801176 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 12:26:03.077626  801176 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 12:26:03.077685  801176 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 12:26:03.077740  801176 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 12:26:03.077796  801176 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 12:26:03.077852  801176 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 12:26:03.077902  801176 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 12:26:03.077956  801176 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 12:26:03.078034  801176 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 12:26:03.137943  801176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 12:26:03.138061  801176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 12:26:03.138164  801176 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 12:26:03.146340  801176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 12:26:03.148947  801176 out.go:252]   - Generating certificates and keys ...
	I0908 12:26:03.149043  801176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 12:26:03.149127  801176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 12:26:03.613320  801176 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 12:26:03.804388  801176 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 12:26:03.875198  801176 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 12:26:04.244779  801176 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 12:26:04.519830  801176 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 12:26:04.519979  801176 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-418489] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 12:26:04.932537  801176 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 12:26:04.932684  801176 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-418489] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 12:26:05.198318  801176 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 12:26:05.366223  801176 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 12:26:05.427137  801176 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 12:26:05.427226  801176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 12:26:05.617717  801176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 12:26:05.962855  801176 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 12:26:06.209562  801176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 12:26:06.278008  801176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 12:26:06.559962  801176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 12:26:06.560403  801176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 12:26:06.562891  801176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 12:26:06.565863  801176 out.go:252]   - Booting up control plane ...
	I0908 12:26:06.565967  801176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 12:26:06.566078  801176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 12:26:06.566164  801176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 12:26:06.575203  801176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 12:26:06.575325  801176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 12:26:06.583100  801176 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 12:26:06.583789  801176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 12:26:06.583855  801176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 12:26:06.665548  801176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 12:26:06.665651  801176 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 12:26:07.167391  801176 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.935597ms
	I0908 12:26:07.170405  801176 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 12:26:07.170502  801176 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0908 12:26:07.170595  801176 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 12:26:07.170657  801176 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 12:26:09.507478  801176 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.337040245s
	I0908 12:26:10.499030  801176 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.328658909s
	I0908 12:26:12.172882  801176 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.002407404s
	I0908 12:26:12.186474  801176 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 12:26:12.197968  801176 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 12:26:12.208628  801176 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 12:26:12.208887  801176 kubeadm.go:310] [mark-control-plane] Marking the node scheduled-stop-418489 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 12:26:12.218484  801176 kubeadm.go:310] [bootstrap-token] Using token: zvg88z.eb0v8oy8f3wu3tri
	I0908 12:26:12.220812  801176 out.go:252]   - Configuring RBAC rules ...
	I0908 12:26:12.220984  801176 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 12:26:12.224314  801176 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 12:26:12.230586  801176 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 12:26:12.234568  801176 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 12:26:12.237485  801176 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 12:26:12.240619  801176 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 12:26:12.579395  801176 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 12:26:13.003626  801176 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 12:26:13.581436  801176 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 12:26:13.582294  801176 kubeadm.go:310] 
	I0908 12:26:13.582362  801176 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 12:26:13.582366  801176 kubeadm.go:310] 
	I0908 12:26:13.582435  801176 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 12:26:13.582438  801176 kubeadm.go:310] 
	I0908 12:26:13.582458  801176 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 12:26:13.582527  801176 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 12:26:13.582597  801176 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 12:26:13.582601  801176 kubeadm.go:310] 
	I0908 12:26:13.582649  801176 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 12:26:13.582652  801176 kubeadm.go:310] 
	I0908 12:26:13.582691  801176 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 12:26:13.582693  801176 kubeadm.go:310] 
	I0908 12:26:13.582786  801176 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 12:26:13.582869  801176 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 12:26:13.582937  801176 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 12:26:13.582942  801176 kubeadm.go:310] 
	I0908 12:26:13.583050  801176 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 12:26:13.583157  801176 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 12:26:13.583163  801176 kubeadm.go:310] 
	I0908 12:26:13.583281  801176 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zvg88z.eb0v8oy8f3wu3tri \
	I0908 12:26:13.583445  801176 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7637fdf591f5caebb87eda0c466f59e2329c3b4b8f568d0c2721bf3a0b62aaf9 \
	I0908 12:26:13.583479  801176 kubeadm.go:310] 	--control-plane 
	I0908 12:26:13.583484  801176 kubeadm.go:310] 
	I0908 12:26:13.583602  801176 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 12:26:13.583607  801176 kubeadm.go:310] 
	I0908 12:26:13.583749  801176 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zvg88z.eb0v8oy8f3wu3tri \
	I0908 12:26:13.583901  801176 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7637fdf591f5caebb87eda0c466f59e2329c3b4b8f568d0c2721bf3a0b62aaf9 
	I0908 12:26:13.586980  801176 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 12:26:13.587234  801176 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0908 12:26:13.587362  801176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 12:26:13.587413  801176 cni.go:84] Creating CNI manager for ""
	I0908 12:26:13.587426  801176 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:26:13.588797  801176 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 12:26:13.589846  801176 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 12:26:13.594050  801176 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 12:26:13.594065  801176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 12:26:13.612057  801176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
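cni.go:143 above records the CNI choice for this combination: with the docker driver and a container runtime other than Docker (CRI-O here), minikube recommends kindnet and applies its manifest with kubectl. The toy function below only restates that one branch of the decision as seen in the log; the "auto" fallback is a placeholder assumption, not the real default in cni.go.

package main

import "fmt"

// recommendCNI mirrors just the branch visible in the log: docker driver +
// non-docker runtime -> kindnet. Everything else falls through to a
// placeholder value that this sketch does not model.
func recommendCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "auto"
}

func main() {
	fmt.Println(recommendCNI("docker", "crio")) // kindnet, as in the log above
}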
	I0908 12:26:13.829989  801176 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 12:26:13.830089  801176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:26:13.830099  801176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-418489 minikube.k8s.io/updated_at=2025_09_08T12_26_13_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=scheduled-stop-418489 minikube.k8s.io/primary=true
	I0908 12:26:13.838746  801176 ops.go:34] apiserver oom_adj: -16
	I0908 12:26:13.927807  801176 kubeadm.go:1105] duration metric: took 97.792006ms to wait for elevateKubeSystemPrivileges
	I0908 12:26:13.927915  801176 kubeadm.go:394] duration metric: took 11.048051372s to StartCluster
	I0908 12:26:13.927944  801176 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:26:13.928035  801176 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:26:13.928768  801176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:26:13.928998  801176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 12:26:13.929005  801176 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:26:13.929099  801176 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:26:13.929207  801176 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-418489"
	I0908 12:26:13.929223  801176 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-418489"
	I0908 12:26:13.929231  801176 config.go:182] Loaded profile config "scheduled-stop-418489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:26:13.929249  801176 host.go:66] Checking if "scheduled-stop-418489" exists ...
	I0908 12:26:13.929250  801176 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-418489"
	I0908 12:26:13.929278  801176 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-418489"
	I0908 12:26:13.929649  801176 cli_runner.go:164] Run: docker container inspect scheduled-stop-418489 --format={{.State.Status}}
	I0908 12:26:13.929762  801176 cli_runner.go:164] Run: docker container inspect scheduled-stop-418489 --format={{.State.Status}}
	I0908 12:26:13.931108  801176 out.go:179] * Verifying Kubernetes components...
	I0908 12:26:13.932401  801176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:26:13.953176  801176 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-418489"
	I0908 12:26:13.953205  801176 host.go:66] Checking if "scheduled-stop-418489" exists ...
	I0908 12:26:13.953227  801176 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:26:13.953531  801176 cli_runner.go:164] Run: docker container inspect scheduled-stop-418489 --format={{.State.Status}}
	I0908 12:26:13.954448  801176 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:26:13.954462  801176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:26:13.954519  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:26:13.972926  801176 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:26:13.972941  801176 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:26:13.973001  801176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-418489
	I0908 12:26:13.974600  801176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/scheduled-stop-418489/id_rsa Username:docker}
	I0908 12:26:14.001428  801176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/scheduled-stop-418489/id_rsa Username:docker}
	I0908 12:26:14.185145  801176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 12:26:14.200140  801176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:26:14.200160  801176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:26:14.282410  801176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:26:14.615503  801176 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0908 12:26:14.804812  801176 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:26:14.804863  801176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:26:14.805616  801176 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0908 12:26:14.806831  801176 addons.go:514] duration metric: took 877.735213ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0908 12:26:14.817907  801176 api_server.go:72] duration metric: took 888.862338ms to wait for apiserver process to appear ...
	I0908 12:26:14.817930  801176 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:26:14.817951  801176 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:26:14.823735  801176 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0908 12:26:14.825005  801176 api_server.go:141] control plane version: v1.34.0
	I0908 12:26:14.825035  801176 api_server.go:131] duration metric: took 7.097269ms to wait for apiserver health ...
	I0908 12:26:14.825043  801176 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:26:14.828584  801176 system_pods.go:59] 5 kube-system pods found
	I0908 12:26:14.828607  801176 system_pods.go:61] "etcd-scheduled-stop-418489" [9539d398-c76f-4be1-8289-4a4372c2baaa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:26:14.828615  801176 system_pods.go:61] "kube-apiserver-scheduled-stop-418489" [7732f850-ad87-4527-8ced-028ff27a5dce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:26:14.828620  801176 system_pods.go:61] "kube-controller-manager-scheduled-stop-418489" [dd5ce9b0-140d-456b-85df-cb461a35e99f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:26:14.828625  801176 system_pods.go:61] "kube-scheduler-scheduled-stop-418489" [b848ca98-4cdf-4498-851d-4342a1fd15ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:26:14.828629  801176 system_pods.go:61] "storage-provisioner" [71bd588c-ff66-4e24-a5e8-35e7316405db] Pending
	I0908 12:26:14.828635  801176 system_pods.go:74] duration metric: took 3.586922ms to wait for pod list to return data ...
	I0908 12:26:14.828645  801176 kubeadm.go:578] duration metric: took 899.61408ms to wait for: map[apiserver:true system_pods:true]
	I0908 12:26:14.828657  801176 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:26:14.831751  801176 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 12:26:14.831771  801176 node_conditions.go:123] node cpu capacity is 8
	I0908 12:26:14.831785  801176 node_conditions.go:105] duration metric: took 3.124931ms to run NodePressure ...
	I0908 12:26:14.831798  801176 start.go:241] waiting for startup goroutines ...
	I0908 12:26:15.119601  801176 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-418489" context rescaled to 1 replicas
	I0908 12:26:15.119636  801176 start.go:246] waiting for cluster config update ...
	I0908 12:26:15.119648  801176 start.go:255] writing updated cluster config ...
	I0908 12:26:15.119977  801176 ssh_runner.go:195] Run: rm -f paused
	I0908 12:26:15.170601  801176 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:26:15.172456  801176 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-418489" cluster and "default" namespace by default
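Before declaring the cluster done, api_server.go above waits for the kube-apiserver process and then polls https://192.168.76.2:8443/healthz until it answers 200/"ok" (about 7ms here). A standalone sketch of that kind of polling loop follows; the insecure TLS config, fixed poll interval, and timeout are assumptions made for the example, not minikube's actual client settings.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the given healthz URL until it returns HTTP 200 or the
// timeout elapses. TLS verification is skipped purely for illustration.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}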
	
	
	==> CRI-O <==
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.438230385Z" level=info msg="Ran pod sandbox b95bda721aa470446685b01efd3532938faa8f1543fb6667e8b26a6cd3a1fc12 with infra container: kube-system/etcd-scheduled-stop-418489/POD" id=17f126d2-3b65-4574-87da-5fccfc1a1974 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.439200273Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=6dfd0542-dc2c-474e-abeb-8119e53c183c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.439408288Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,RepoTags:[registry.k8s.io/etcd:3.6.4-0],RepoDigests:[registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19],Size_:195976448,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=6dfd0542-dc2c-474e-abeb-8119e53c183c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.440057965Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=976f70d5-394c-4bb6-bd77-60f6d5e42db1 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.440222163Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,RepoTags:[registry.k8s.io/etcd:3.6.4-0],RepoDigests:[registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19],Size_:195976448,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=976f70d5-394c-4bb6-bd77-60f6d5e42db1 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.440244568Z" level=info msg="Creating container: kube-system/kube-controller-manager-scheduled-stop-418489/kube-controller-manager" id=12d1e0fb-26cc-435d-81a2-74f9933c7d96 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.440341877Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.441481372Z" level=info msg="Creating container: kube-system/kube-scheduler-scheduled-stop-418489/kube-scheduler" id=b7df43e8-7e97-4c28-877c-c1f9c7ec8373 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.441577147Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.443546065Z" level=info msg="Creating container: kube-system/kube-apiserver-scheduled-stop-418489/kube-apiserver" id=a750ae17-dd93-4a40-a120-f037383234f0 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.443642092Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.445482477Z" level=info msg="Creating container: kube-system/etcd-scheduled-stop-418489/etcd" id=6f557c40-27f9-48cb-8307-f58c81f6c9d6 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.445564841Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.585666948Z" level=info msg="Created container 43ff9606c42bbcfd65321ccc1617a383f0972f53aa78ff54013de89a7fd76000: kube-system/kube-controller-manager-scheduled-stop-418489/kube-controller-manager" id=12d1e0fb-26cc-435d-81a2-74f9933c7d96 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.586755467Z" level=info msg="Starting container: 43ff9606c42bbcfd65321ccc1617a383f0972f53aa78ff54013de89a7fd76000" id=60c236e6-f42a-4782-85cd-836891b867d7 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.591783078Z" level=info msg="Created container 0f9953ba894c9d0e21e253267cebbc54f5d18e7e42e310254808566c1ba48e67: kube-system/kube-apiserver-scheduled-stop-418489/kube-apiserver" id=a750ae17-dd93-4a40-a120-f037383234f0 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.592251836Z" level=info msg="Created container a13620429ff6d78f90efac37e2edaa1db8f906416d0ec8eacfc17af8558dfbf6: kube-system/etcd-scheduled-stop-418489/etcd" id=6f557c40-27f9-48cb-8307-f58c81f6c9d6 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.592539235Z" level=info msg="Starting container: 0f9953ba894c9d0e21e253267cebbc54f5d18e7e42e310254808566c1ba48e67" id=838f0026-b205-4397-bcc4-748ad60fac03 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.592738342Z" level=info msg="Starting container: a13620429ff6d78f90efac37e2edaa1db8f906416d0ec8eacfc17af8558dfbf6" id=79bc4e32-df53-4088-9551-7a09e1a5646c name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.595008206Z" level=info msg="Started container" PID=1504 containerID=43ff9606c42bbcfd65321ccc1617a383f0972f53aa78ff54013de89a7fd76000 description=kube-system/kube-controller-manager-scheduled-stop-418489/kube-controller-manager id=60c236e6-f42a-4782-85cd-836891b867d7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1574469f68080c44712efeb5ada4bcbbbae0a37b8f10cda7d4f76ac2e185c095
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.600414938Z" level=info msg="Started container" PID=1546 containerID=0f9953ba894c9d0e21e253267cebbc54f5d18e7e42e310254808566c1ba48e67 description=kube-system/kube-apiserver-scheduled-stop-418489/kube-apiserver id=838f0026-b205-4397-bcc4-748ad60fac03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9fb8b55d2bbb1b6b92612661aefba1d4c980608ea90692e6bb20392e853a9e19
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.601397115Z" level=info msg="Created container 741b6992c32196c27f12c554acd31083943886181020ff62ba51acbbd3b9619d: kube-system/kube-scheduler-scheduled-stop-418489/kube-scheduler" id=b7df43e8-7e97-4c28-877c-c1f9c7ec8373 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.602099291Z" level=info msg="Starting container: 741b6992c32196c27f12c554acd31083943886181020ff62ba51acbbd3b9619d" id=a5b1f66a-7d9e-41a1-80ee-79a95b020307 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.604526881Z" level=info msg="Started container" PID=1544 containerID=a13620429ff6d78f90efac37e2edaa1db8f906416d0ec8eacfc17af8558dfbf6 description=kube-system/etcd-scheduled-stop-418489/etcd id=79bc4e32-df53-4088-9551-7a09e1a5646c name=/runtime.v1.RuntimeService/StartContainer sandboxID=b95bda721aa470446685b01efd3532938faa8f1543fb6667e8b26a6cd3a1fc12
	Sep 08 12:26:07 scheduled-stop-418489 crio[1062]: time="2025-09-08 12:26:07.610718558Z" level=info msg="Started container" PID=1560 containerID=741b6992c32196c27f12c554acd31083943886181020ff62ba51acbbd3b9619d description=kube-system/kube-scheduler-scheduled-stop-418489/kube-scheduler id=a5b1f66a-7d9e-41a1-80ee-79a95b020307 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f715849ed6823afde80a2667d791baf5858f6f8265844a46852682e807bcd064
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	741b6992c3219       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 seconds ago       Running             kube-scheduler            0                   f715849ed6823       kube-scheduler-scheduled-stop-418489
	0f9953ba894c9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 seconds ago       Running             kube-apiserver            0                   9fb8b55d2bbb1       kube-apiserver-scheduled-stop-418489
	a13620429ff6d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      0                   b95bda721aa47       etcd-scheduled-stop-418489
	43ff9606c42bb       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 seconds ago       Running             kube-controller-manager   0                   1574469f68080       kube-controller-manager-scheduled-stop-418489
	
	
	==> describe nodes <==
	Name:               scheduled-stop-418489
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=scheduled-stop-418489
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=scheduled-stop-418489
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_26_13_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:26:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-418489
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:26:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:26:13 +0000   Mon, 08 Sep 2025 12:26:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:26:13 +0000   Mon, 08 Sep 2025 12:26:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:26:13 +0000   Mon, 08 Sep 2025 12:26:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 08 Sep 2025 12:26:13 +0000   Mon, 08 Sep 2025 12:26:08 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-418489
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 4eccc0217d8f4af8812305364243b604
	  System UUID:                d99ede3c-9e18-49ba-a77c-ac958c35f7ad
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-418489                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3s
	  kube-system                 kube-apiserver-scheduled-stop-418489             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-scheduled-stop-418489    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-scheduled-stop-418489             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (8%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age               From     Message
	  ----     ------                   ----              ----     -------
	  Normal   Starting                 10s               kubelet  Starting kubelet.
	  Warning  CgroupV1                 10s               kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s (x8 over 10s)  kubelet  Node scheduled-stop-418489 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x8 over 10s)  kubelet  Node scheduled-stop-418489 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x8 over 10s)  kubelet  Node scheduled-stop-418489 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s                kubelet  Starting kubelet.
	  Warning  CgroupV1                 4s                kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3s                kubelet  Node scheduled-stop-418489 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3s                kubelet  Node scheduled-stop-418489 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3s                kubelet  Node scheduled-stop-418489 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 5e f1 f1 05 f9 70 66 83 29 4a 07 67 08 00
	[Sep 8 12:25] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000010] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +0.000035] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000002] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +0.000089] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000003] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +1.013479] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000004] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +0.000000] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +2.015836] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000008] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000001] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +4.063685] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000004] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +0.000024] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +8.191205] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000023] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +0.003991] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000004] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2e0158c39227
	[  +0.000001] ll header: 00000000: 2e 84 b5 37 ae 96 8a 0b e5 a9 de c4 08 00
	
	
	==> etcd [a13620429ff6d78f90efac37e2edaa1db8f906416d0ec8eacfc17af8558dfbf6] <==
	{"level":"warn","ts":"2025-09-08T12:26:09.427084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.480774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.498579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.506390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.513181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.576112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.580235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.586837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.594128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.601419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.627584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.635418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.642365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.675915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.684335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.691188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.698340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.705150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.712347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.723204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.756038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.759612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.766015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.772619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:26:09.826090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50356","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:26:16 up  3:08,  0 users,  load average: 0.59, 0.87, 1.14
	Linux scheduled-stop-418489 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [0f9953ba894c9d0e21e253267cebbc54f5d18e7e42e310254808566c1ba48e67] <==
	I0908 12:26:10.479953       1 cache.go:39] Caches are synced for autoregister controller
	I0908 12:26:10.483276       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0908 12:26:10.492838       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0908 12:26:10.492876       1 policy_source.go:240] refreshing policies
	E0908 12:26:10.537151       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E0908 12:26:10.544156       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0908 12:26:10.582894       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 12:26:10.587506       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 12:26:10.588073       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I0908 12:26:10.595053       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 12:26:10.595739       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0908 12:26:10.748209       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 12:26:11.327351       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0908 12:26:11.331387       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0908 12:26:11.331405       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 12:26:12.040413       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 12:26:12.088904       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 12:26:12.188307       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0908 12:26:12.194717       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0908 12:26:12.195939       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 12:26:12.201394       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 12:26:12.390722       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0908 12:26:12.988727       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 12:26:13.000893       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0908 12:26:13.011199       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [43ff9606c42bbcfd65321ccc1617a383f0972f53aa78ff54013de89a7fd76000] <==
	I0908 12:26:16.694981       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="resourceclaimtemplates.resource.k8s.io"
	I0908 12:26:16.695093       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0908 12:26:16.695132       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0908 12:26:16.695158       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0908 12:26:16.695187       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0908 12:26:16.695223       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0908 12:26:16.695253       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0908 12:26:16.695280       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0908 12:26:16.695338       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0908 12:26:16.695386       1 shared_informer.go:682] "Warning: resync period is smaller than resync check period and the informer has already started. Changing it to the resync check period" resyncPeriod="19h36m30.000878815s" resyncCheckPeriod="20h1m24.410360063s"
	I0908 12:26:16.695443       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0908 12:26:16.695470       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0908 12:26:16.695501       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0908 12:26:16.695536       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0908 12:26:16.695561       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0908 12:26:16.695583       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0908 12:26:16.695619       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0908 12:26:16.695643       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0908 12:26:16.695803       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0908 12:26:16.695884       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0908 12:26:16.695936       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0908 12:26:16.695982       1 controllermanager.go:781] "Started controller" controller="resourcequota-controller"
	I0908 12:26:16.696067       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0908 12:26:16.696092       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0908 12:26:16.696143       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	
	
	==> kube-scheduler [741b6992c32196c27f12c554acd31083943886181020ff62ba51acbbd3b9619d] <==
	E0908 12:26:10.497822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 12:26:10.497826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 12:26:10.497833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 12:26:10.497903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 12:26:10.498005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 12:26:10.497925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 12:26:10.497945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 12:26:10.497936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 12:26:10.498089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 12:26:10.498173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 12:26:10.498188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 12:26:10.498414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 12:26:11.381842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 12:26:11.441322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 12:26:11.477507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 12:26:11.477575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 12:26:11.497705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 12:26:11.524059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 12:26:11.577976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 12:26:11.644628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0908 12:26:11.646508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 12:26:11.660860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 12:26:11.676527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 12:26:11.776952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0908 12:26:14.694457       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280288    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f80aaa81a2f3ee65873483839f8ba08c-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-418489\" (UID: \"f80aaa81a2f3ee65873483839f8ba08c\") " pod="kube-system/kube-controller-manager-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280314    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f80aaa81a2f3ee65873483839f8ba08c-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-418489\" (UID: \"f80aaa81a2f3ee65873483839f8ba08c\") " pod="kube-system/kube-controller-manager-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280384    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f80aaa81a2f3ee65873483839f8ba08c-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-418489\" (UID: \"f80aaa81a2f3ee65873483839f8ba08c\") " pod="kube-system/kube-controller-manager-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280439    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f5cca96765a6d69b41a92a6810f53050-etcd-certs\") pod \"etcd-scheduled-stop-418489\" (UID: \"f5cca96765a6d69b41a92a6810f53050\") " pod="kube-system/etcd-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280468    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/736fccc70f4f926aa79b54e3922834a2-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-418489\" (UID: \"736fccc70f4f926aa79b54e3922834a2\") " pod="kube-system/kube-apiserver-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280492    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/736fccc70f4f926aa79b54e3922834a2-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-418489\" (UID: \"736fccc70f4f926aa79b54e3922834a2\") " pod="kube-system/kube-apiserver-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280517    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f80aaa81a2f3ee65873483839f8ba08c-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-418489\" (UID: \"f80aaa81a2f3ee65873483839f8ba08c\") " pod="kube-system/kube-controller-manager-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280542    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f80aaa81a2f3ee65873483839f8ba08c-ca-certs\") pod \"kube-controller-manager-scheduled-stop-418489\" (UID: \"f80aaa81a2f3ee65873483839f8ba08c\") " pod="kube-system/kube-controller-manager-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280583    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f80aaa81a2f3ee65873483839f8ba08c-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-418489\" (UID: \"f80aaa81a2f3ee65873483839f8ba08c\") " pod="kube-system/kube-controller-manager-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280617    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f80aaa81a2f3ee65873483839f8ba08c-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-418489\" (UID: \"f80aaa81a2f3ee65873483839f8ba08c\") " pod="kube-system/kube-controller-manager-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.280645    1704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/527f9a5be83e33abcbe11737210d9e5a-kubeconfig\") pod \"kube-scheduler-scheduled-stop-418489\" (UID: \"527f9a5be83e33abcbe11737210d9e5a\") " pod="kube-system/kube-scheduler-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.833621    1704 apiserver.go:52] "Watching apiserver"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.877255    1704 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.909345    1704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.909462    1704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.909579    1704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.909692    1704 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: E0908 12:26:13.917993    1704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-418489\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: E0908 12:26:13.918222    1704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-418489\" already exists" pod="kube-system/etcd-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: E0908 12:26:13.918431    1704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-418489\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: E0908 12:26:13.920288    1704 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-scheduled-stop-418489\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-418489"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.931549    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-418489" podStartSLOduration=0.931528091 podStartE2EDuration="931.528091ms" podCreationTimestamp="2025-09-08 12:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 12:26:13.931483058 +0000 UTC m=+1.168632624" watchObservedRunningTime="2025-09-08 12:26:13.931528091 +0000 UTC m=+1.168677657"
	Sep 08 12:26:13 scheduled-stop-418489 kubelet[1704]: I0908 12:26:13.998724    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-418489" podStartSLOduration=0.998696963 podStartE2EDuration="998.696963ms" podCreationTimestamp="2025-09-08 12:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 12:26:13.987105767 +0000 UTC m=+1.224255349" watchObservedRunningTime="2025-09-08 12:26:13.998696963 +0000 UTC m=+1.235846528"
	Sep 08 12:26:14 scheduled-stop-418489 kubelet[1704]: I0908 12:26:14.012338    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-418489" podStartSLOduration=1.012312483 podStartE2EDuration="1.012312483s" podCreationTimestamp="2025-09-08 12:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 12:26:14.000362163 +0000 UTC m=+1.237511751" watchObservedRunningTime="2025-09-08 12:26:14.012312483 +0000 UTC m=+1.249462032"
	Sep 08 12:26:14 scheduled-stop-418489 kubelet[1704]: I0908 12:26:14.088318    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-418489" podStartSLOduration=1.088290781 podStartE2EDuration="1.088290781s" podCreationTimestamp="2025-09-08 12:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 12:26:14.013221774 +0000 UTC m=+1.250371339" watchObservedRunningTime="2025-09-08 12:26:14.088290781 +0000 UTC m=+1.325440348"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p scheduled-stop-418489 -n scheduled-stop-418489
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-418489 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-418489 describe pod storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-418489 describe pod storage-provisioner: exit status 1 (117.656751ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-418489 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-418489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-418489
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-418489: (1.861402856s)
--- FAIL: TestScheduledStopUnix (26.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (929.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (15m29.514777706s)

                                                
                                                
-- stdout --
	* [calico-283124] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-283124" primary control-plane node in "calico-283124" cluster
	* Pulling base image v0.0.47-1756980985-21488 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:31:06.807307  876341 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:31:06.807573  876341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:31:06.807584  876341 out.go:374] Setting ErrFile to fd 2...
	I0908 12:31:06.807587  876341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:31:06.808554  876341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:31:06.809789  876341 out.go:368] Setting JSON to false
	I0908 12:31:06.811311  876341 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11611,"bootTime":1757323056,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 12:31:06.811456  876341 start.go:140] virtualization: kvm guest
	I0908 12:31:06.813578  876341 out.go:179] * [calico-283124] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 12:31:06.815297  876341 notify.go:220] Checking for updates...
	I0908 12:31:06.815307  876341 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:31:06.817136  876341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:31:06.818661  876341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:31:06.820149  876341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 12:31:06.821582  876341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 12:31:06.823026  876341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:31:06.825077  876341 config.go:182] Loaded profile config "cert-expiration-310765": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:31:06.825196  876341 config.go:182] Loaded profile config "kindnet-283124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:31:06.825315  876341 config.go:182] Loaded profile config "kubernetes-upgrade-770876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:31:06.825426  876341 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:31:06.851537  876341 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:31:06.851762  876341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:31:06.906816  876341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:31:06.895663577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:31:06.906943  876341 docker.go:318] overlay module found
	I0908 12:31:06.910042  876341 out.go:179] * Using the docker driver based on user configuration
	I0908 12:31:06.911713  876341 start.go:304] selected driver: docker
	I0908 12:31:06.911742  876341 start.go:918] validating driver "docker" against <nil>
	I0908 12:31:06.911761  876341 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:31:06.912826  876341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:31:06.965882  876341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:31:06.955981687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:31:06.966086  876341 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:31:06.966326  876341 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:31:06.968141  876341 out.go:179] * Using Docker driver with root privileges
	I0908 12:31:06.969584  876341 cni.go:84] Creating CNI manager for "calico"
	I0908 12:31:06.969618  876341 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0908 12:31:06.969741  876341 start.go:348] cluster config:
	{Name:calico-283124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-283124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:31:06.971428  876341 out.go:179] * Starting "calico-283124" primary control-plane node in "calico-283124" cluster
	I0908 12:31:06.972813  876341 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:31:06.974321  876341 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:31:06.975707  876341 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:31:06.975774  876341 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 12:31:06.975790  876341 cache.go:58] Caching tarball of preloaded images
	I0908 12:31:06.975826  876341 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:31:06.975909  876341 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 12:31:06.975926  876341 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:31:06.976062  876341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/config.json ...
	I0908 12:31:06.976114  876341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/config.json: {Name:mk499891d3a001a3ecef6da3fc64ed4720eb14ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:06.999976  876341 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:31:07.000003  876341 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:31:07.000027  876341 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:31:07.000063  876341 start.go:360] acquireMachinesLock for calico-283124: {Name:mkb64a64b1d25dfc5bacb17e8eb8d3306d20f70f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:31:07.000188  876341 start.go:364] duration metric: took 98.888µs to acquireMachinesLock for "calico-283124"
	I0908 12:31:07.000221  876341 start.go:93] Provisioning new machine with config: &{Name:calico-283124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-283124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:31:07.000322  876341 start.go:125] createHost starting for "" (driver="docker")
	I0908 12:31:07.002556  876341 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0908 12:31:07.002949  876341 start.go:159] libmachine.API.Create for "calico-283124" (driver="docker")
	I0908 12:31:07.003015  876341 client.go:168] LocalClient.Create starting
	I0908 12:31:07.003134  876341 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem
	I0908 12:31:07.003184  876341 main.go:141] libmachine: Decoding PEM data...
	I0908 12:31:07.003211  876341 main.go:141] libmachine: Parsing certificate...
	I0908 12:31:07.003287  876341 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem
	I0908 12:31:07.003322  876341 main.go:141] libmachine: Decoding PEM data...
	I0908 12:31:07.003339  876341 main.go:141] libmachine: Parsing certificate...
	I0908 12:31:07.003872  876341 cli_runner.go:164] Run: docker network inspect calico-283124 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 12:31:07.024658  876341 cli_runner.go:211] docker network inspect calico-283124 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 12:31:07.024769  876341 network_create.go:284] running [docker network inspect calico-283124] to gather additional debugging logs...
	I0908 12:31:07.024793  876341 cli_runner.go:164] Run: docker network inspect calico-283124
	W0908 12:31:07.046895  876341 cli_runner.go:211] docker network inspect calico-283124 returned with exit code 1
	I0908 12:31:07.046929  876341 network_create.go:287] error running [docker network inspect calico-283124]: docker network inspect calico-283124: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-283124 not found
	I0908 12:31:07.046952  876341 network_create.go:289] output of [docker network inspect calico-283124]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-283124 not found
	
	** /stderr **
	I0908 12:31:07.047089  876341 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:31:07.069777  876341 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a42c506aba4a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:58:c9:24:f1:2c} reservation:<nil>}
	I0908 12:31:07.070659  876341 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-05a0e362ea2b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:9a:f7:2e:f5:bc} reservation:<nil>}
	I0908 12:31:07.071438  876341 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3274e51da707 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5e:f1:f1:05:f9:70} reservation:<nil>}
	I0908 12:31:07.072633  876341 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e66d60}
	I0908 12:31:07.072663  876341 network_create.go:124] attempt to create docker network calico-283124 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0908 12:31:07.072713  876341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-283124 calico-283124
	I0908 12:31:07.143719  876341 network_create.go:108] docker network calico-283124 192.168.76.0/24 created
	I0908 12:31:07.143771  876341 kic.go:121] calculated static IP "192.168.76.2" for the "calico-283124" container
	I0908 12:31:07.143835  876341 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 12:31:07.163924  876341 cli_runner.go:164] Run: docker volume create calico-283124 --label name.minikube.sigs.k8s.io=calico-283124 --label created_by.minikube.sigs.k8s.io=true
	I0908 12:31:07.183696  876341 oci.go:103] Successfully created a docker volume calico-283124
	I0908 12:31:07.183788  876341 cli_runner.go:164] Run: docker run --rm --name calico-283124-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-283124 --entrypoint /usr/bin/test -v calico-283124:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 12:31:07.700821  876341 oci.go:107] Successfully prepared a docker volume calico-283124
	I0908 12:31:07.700884  876341 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:31:07.700913  876341 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 12:31:07.701029  876341 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-283124:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 12:31:12.374023  876341 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-283124:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.672935434s)
	I0908 12:31:12.374070  876341 kic.go:203] duration metric: took 4.673151402s to extract preloaded images to volume ...
	W0908 12:31:12.374225  876341 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 12:31:12.374368  876341 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 12:31:12.425315  876341 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-283124 --name calico-283124 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-283124 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-283124 --network calico-283124 --ip 192.168.76.2 --volume calico-283124:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
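	[editor's aside, hedged, not part of the captured log] The resource flags in the docker run line above can be checked on the created container; --memory=3072mb should surface as a Memory value of 3221225472 bytes (3 GiB) and --cpus=2 as NanoCpus 2000000000, and the published SSH port is the one the SSH client uses below:
	    docker inspect calico-283124 --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'
	    docker port calico-283124 22   # the 127.0.0.1:<port> mapping used for SSH below (33433 in this run)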
	I0908 12:31:12.720816  876341 cli_runner.go:164] Run: docker container inspect calico-283124 --format={{.State.Running}}
	I0908 12:31:12.740266  876341 cli_runner.go:164] Run: docker container inspect calico-283124 --format={{.State.Status}}
	I0908 12:31:12.762200  876341 cli_runner.go:164] Run: docker exec calico-283124 stat /var/lib/dpkg/alternatives/iptables
	I0908 12:31:12.813472  876341 oci.go:144] the created container "calico-283124" has a running status.
	I0908 12:31:12.813507  876341 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/calico-283124/id_rsa...
	I0908 12:31:13.263879  876341 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21512-614854/.minikube/machines/calico-283124/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 12:31:13.288960  876341 cli_runner.go:164] Run: docker container inspect calico-283124 --format={{.State.Status}}
	I0908 12:31:13.310168  876341 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 12:31:13.310198  876341 kic_runner.go:114] Args: [docker exec --privileged calico-283124 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 12:31:13.386999  876341 cli_runner.go:164] Run: docker container inspect calico-283124 --format={{.State.Status}}
	I0908 12:31:13.406971  876341 machine.go:93] provisionDockerMachine start ...
	I0908 12:31:13.407109  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:13.428904  876341 main.go:141] libmachine: Using SSH client type: native
	I0908 12:31:13.429210  876341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0908 12:31:13.429235  876341 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:31:13.555975  876341 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-283124
	
	I0908 12:31:13.556019  876341 ubuntu.go:182] provisioning hostname "calico-283124"
	I0908 12:31:13.556083  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:13.575978  876341 main.go:141] libmachine: Using SSH client type: native
	I0908 12:31:13.576312  876341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0908 12:31:13.576335  876341 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-283124 && echo "calico-283124" | sudo tee /etc/hostname
	I0908 12:31:13.713037  876341 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-283124
	
	I0908 12:31:13.713116  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:13.731809  876341 main.go:141] libmachine: Using SSH client type: native
	I0908 12:31:13.732053  876341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0908 12:31:13.732073  876341 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-283124' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-283124/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-283124' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:31:13.852609  876341 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:31:13.852649  876341 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 12:31:13.852673  876341 ubuntu.go:190] setting up certificates
	I0908 12:31:13.852688  876341 provision.go:84] configureAuth start
	I0908 12:31:13.852753  876341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-283124
	I0908 12:31:13.871401  876341 provision.go:143] copyHostCerts
	I0908 12:31:13.871489  876341 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem, removing ...
	I0908 12:31:13.871503  876341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem
	I0908 12:31:13.871588  876341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 12:31:13.871760  876341 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem, removing ...
	I0908 12:31:13.871774  876341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem
	I0908 12:31:13.871808  876341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 12:31:13.871890  876341 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem, removing ...
	I0908 12:31:13.871903  876341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem
	I0908 12:31:13.871938  876341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 12:31:13.872010  876341 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.calico-283124 san=[127.0.0.1 192.168.76.2 calico-283124 localhost minikube]
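	[editor's aside, hedged, not part of the captured log] provision.go generates the server certificate in Go; purely as an illustration, an equivalent certificate with the same SAN list could be produced with openssl, reusing the ca.pem/ca-key.pem names from the log (a sketch, not minikube's actual implementation):
	    openssl genrsa -out server-key.pem 2048
	    openssl req -new -key server-key.pem -subj "/O=jenkins.calico-283124/CN=minikube" -out server.csr
	    openssl x509 -req -in server.csr \
	      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:calico-283124,DNS:localhost,DNS:minikube') \
	      -out server.pem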
	I0908 12:31:14.510830  876341 provision.go:177] copyRemoteCerts
	I0908 12:31:14.510913  876341 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:31:14.510955  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:14.533383  876341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/calico-283124/id_rsa Username:docker}
	I0908 12:31:14.625972  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:31:14.652836  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 12:31:14.678560  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:31:14.704574  876341 provision.go:87] duration metric: took 851.865737ms to configureAuth
	I0908 12:31:14.704622  876341 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:31:14.704839  876341 config.go:182] Loaded profile config "calico-283124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:31:14.704963  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:14.725439  876341 main.go:141] libmachine: Using SSH client type: native
	I0908 12:31:14.725712  876341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0908 12:31:14.725730  876341 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:31:14.948837  876341 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:31:14.948872  876341 machine.go:96] duration metric: took 1.541871601s to provisionDockerMachine
	I0908 12:31:14.948884  876341 client.go:171] duration metric: took 7.945858908s to LocalClient.Create
	I0908 12:31:14.948911  876341 start.go:167] duration metric: took 7.945968198s to libmachine.API.Create "calico-283124"
	I0908 12:31:14.948921  876341 start.go:293] postStartSetup for "calico-283124" (driver="docker")
	I0908 12:31:14.948934  876341 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:31:14.949038  876341 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:31:14.949107  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:14.969887  876341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/calico-283124/id_rsa Username:docker}
	I0908 12:31:15.062452  876341 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:31:15.066397  876341 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:31:15.066432  876341 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:31:15.066441  876341 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:31:15.066448  876341 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:31:15.066461  876341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 12:31:15.066582  876341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 12:31:15.066658  876341 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem -> 6186202.pem in /etc/ssl/certs
	I0908 12:31:15.066746  876341 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:31:15.077082  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:31:15.104714  876341 start.go:296] duration metric: took 155.774843ms for postStartSetup
	I0908 12:31:15.105099  876341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-283124
	I0908 12:31:15.124659  876341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/config.json ...
	I0908 12:31:15.124928  876341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:31:15.124975  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:15.143338  876341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/calico-283124/id_rsa Username:docker}
	I0908 12:31:15.232890  876341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:31:15.237594  876341 start.go:128] duration metric: took 8.237251718s to createHost
	I0908 12:31:15.237626  876341 start.go:83] releasing machines lock for "calico-283124", held for 8.237421881s
	I0908 12:31:15.237699  876341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-283124
	I0908 12:31:15.255869  876341 ssh_runner.go:195] Run: cat /version.json
	I0908 12:31:15.255931  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:15.255956  876341 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:31:15.256045  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:15.275547  876341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/calico-283124/id_rsa Username:docker}
	I0908 12:31:15.275784  876341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/calico-283124/id_rsa Username:docker}
	I0908 12:31:15.440862  876341 ssh_runner.go:195] Run: systemctl --version
	I0908 12:31:15.445901  876341 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:31:15.591154  876341 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:31:15.596255  876341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:31:15.618408  876341 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:31:15.618497  876341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:31:15.648681  876341 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 12:31:15.648707  876341 start.go:495] detecting cgroup driver to use...
	I0908 12:31:15.648742  876341 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:31:15.648800  876341 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:31:15.665226  876341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:31:15.676978  876341 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:31:15.677056  876341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:31:15.690793  876341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:31:15.706246  876341 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:31:15.796849  876341 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:31:15.888838  876341 docker.go:234] disabling docker service ...
	I0908 12:31:15.888916  876341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:31:15.911840  876341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:31:15.926407  876341 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:31:16.011983  876341 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:31:16.108343  876341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:31:16.120355  876341 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:31:16.139919  876341 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:31:16.140004  876341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:31:16.153522  876341 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:31:16.153610  876341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:31:16.166926  876341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:31:16.178789  876341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:31:16.190343  876341 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:31:16.200544  876341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:31:16.211478  876341 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:31:16.229747  876341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:31:16.241267  876341 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:31:16.250693  876341 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:31:16.259916  876341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:31:16.342109  876341 ssh_runner.go:195] Run: sudo systemctl restart crio
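	[editor's aside, hedged, not part of the captured log] After the sed edits and the crio restart above, /etc/crio/crio.conf.d/02-crio.conf should carry the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl that those commands wrote. One way to confirm, assuming the file layout the commands imply:
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected (approximately):
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",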
	I0908 12:31:16.435734  876341 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:31:16.435807  876341 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:31:16.439946  876341 start.go:563] Will wait 60s for crictl version
	I0908 12:31:16.440049  876341 ssh_runner.go:195] Run: which crictl
	I0908 12:31:16.443860  876341 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:31:16.480862  876341 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 12:31:16.480956  876341 ssh_runner.go:195] Run: crio --version
	I0908 12:31:16.521987  876341 ssh_runner.go:195] Run: crio --version
	I0908 12:31:16.584774  876341 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 12:31:16.586395  876341 cli_runner.go:164] Run: docker network inspect calico-283124 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:31:16.617070  876341 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0908 12:31:16.622004  876341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:31:16.635827  876341 kubeadm.go:875] updating cluster {Name:calico-283124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-283124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:31:16.636018  876341 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:31:16.636102  876341 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:31:16.728315  876341 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:31:16.728345  876341 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:31:16.728397  876341 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:31:16.764741  876341 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:31:16.764763  876341 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:31:16.764772  876341 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0908 12:31:16.764871  876341 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-283124 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-283124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0908 12:31:16.764942  876341 ssh_runner.go:195] Run: crio config
	I0908 12:31:16.830691  876341 cni.go:84] Creating CNI manager for "calico"
	I0908 12:31:16.830718  876341 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:31:16.830746  876341 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-283124 NodeName:calico-283124 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:31:16.830865  876341 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-283124"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:31:16.830926  876341 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:31:16.841960  876341 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:31:16.842051  876341 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:31:16.851944  876341 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 12:31:16.871314  876341 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:31:16.903965  876341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
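	[editor's aside, hedged, not part of the captured log] The kubeadm config shown above has now been copied to /var/tmp/minikube/kubeadm.yaml.new; once it lands at /var/tmp/minikube/kubeadm.yaml (as happens further down), it can be sanity-checked without mutating the node via kubeadm's dry-run mode, e.g.:
	    sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run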
	I0908 12:31:16.924199  876341 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:31:16.928263  876341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:31:16.941176  876341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:31:17.029242  876341 ssh_runner.go:195] Run: sudo systemctl start kubelet
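	[editor's aside, hedged, not part of the captured log] With the kubelet unit and its drop-in written above and the service started, the effective flags (including the --node-ip override) can be checked on the node:
	    systemctl cat kubelet | grep -A1 '^ExecStart=/var'
	    grep -- '--node-ip=192.168.76.2' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf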
	I0908 12:31:17.044870  876341 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124 for IP: 192.168.76.2
	I0908 12:31:17.044900  876341 certs.go:194] generating shared ca certs ...
	I0908 12:31:17.044922  876341 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:17.045096  876341 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 12:31:17.045150  876341 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 12:31:17.045161  876341 certs.go:256] generating profile certs ...
	I0908 12:31:17.045218  876341 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/client.key
	I0908 12:31:17.045232  876341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/client.crt with IP's: []
	I0908 12:31:17.210537  876341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/client.crt ...
	I0908 12:31:17.210579  876341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/client.crt: {Name:mk22b3c93a0a1ca978197c86f6336038faca37a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:17.210797  876341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/client.key ...
	I0908 12:31:17.210813  876341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/client.key: {Name:mkb6d5a7b7739de47beb9386d0dba1a63d474280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:17.210919  876341 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.key.b3bd01d3
	I0908 12:31:17.210942  876341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.crt.b3bd01d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0908 12:31:17.650381  876341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.crt.b3bd01d3 ...
	I0908 12:31:17.650417  876341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.crt.b3bd01d3: {Name:mkd641a7f6fa5080d138fe50a228dc5379af3cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:17.650627  876341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.key.b3bd01d3 ...
	I0908 12:31:17.650646  876341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.key.b3bd01d3: {Name:mk3fb757ae12bda1f1b2ea5e282055580487bf0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:17.650747  876341 certs.go:381] copying /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.crt.b3bd01d3 -> /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.crt
	I0908 12:31:17.650854  876341 certs.go:385] copying /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.key.b3bd01d3 -> /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.key
	I0908 12:31:17.650917  876341 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/proxy-client.key
	I0908 12:31:17.650935  876341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/proxy-client.crt with IP's: []
	I0908 12:31:17.791086  876341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/proxy-client.crt ...
	I0908 12:31:17.791117  876341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/proxy-client.crt: {Name:mkc057a1b4e9e1620d0768bc7c84e22918a5a7fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:17.791294  876341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/proxy-client.key ...
	I0908 12:31:17.791310  876341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/proxy-client.key: {Name:mk21bb69caa240bfb992a669d4d822acbcee2082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:17.791587  876341 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem (1338 bytes)
	W0908 12:31:17.791644  876341 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620_empty.pem, impossibly tiny 0 bytes
	I0908 12:31:17.791706  876341 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:31:17.791747  876341 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:31:17.791802  876341 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:31:17.791845  876341 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 12:31:17.791919  876341 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:31:17.792711  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:31:17.823616  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:31:17.850100  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:31:17.878009  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 12:31:17.913195  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 12:31:17.942515  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:31:17.973128  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:31:18.002925  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/calico-283124/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 12:31:18.035090  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem --> /usr/share/ca-certificates/618620.pem (1338 bytes)
	I0908 12:31:18.068368  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /usr/share/ca-certificates/6186202.pem (1708 bytes)
	I0908 12:31:18.101168  876341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:31:18.130047  876341 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:31:18.150409  876341 ssh_runner.go:195] Run: openssl version
	I0908 12:31:18.157302  876341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/618620.pem && ln -fs /usr/share/ca-certificates/618620.pem /etc/ssl/certs/618620.pem"
	I0908 12:31:18.169637  876341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/618620.pem
	I0908 12:31:18.174544  876341 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:43 /usr/share/ca-certificates/618620.pem
	I0908 12:31:18.174634  876341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/618620.pem
	I0908 12:31:18.183379  876341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/618620.pem /etc/ssl/certs/51391683.0"
	I0908 12:31:18.196423  876341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6186202.pem && ln -fs /usr/share/ca-certificates/6186202.pem /etc/ssl/certs/6186202.pem"
	I0908 12:31:18.209266  876341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6186202.pem
	I0908 12:31:18.213353  876341 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:43 /usr/share/ca-certificates/6186202.pem
	I0908 12:31:18.213437  876341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6186202.pem
	I0908 12:31:18.221499  876341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6186202.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:31:18.232884  876341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:31:18.244549  876341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:31:18.248965  876341 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:31:18.249102  876341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:31:18.257025  876341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
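	[editor's aside, hedged, not part of the captured log] The openssl/ln pairs above implement OpenSSL's hashed-symlink layout: the link name is the certificate's subject hash with a .0 suffix, which is why each hash is computed right before its symlink is created. For the minikube CA this looks like:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    echo "$h"                                   # b5213941 in this run, per the log above
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"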
	I0908 12:31:18.268965  876341 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:31:18.273122  876341 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 12:31:18.273195  876341 kubeadm.go:392] StartCluster: {Name:calico-283124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-283124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:31:18.273310  876341 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:31:18.273377  876341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:31:18.314623  876341 cri.go:89] found id: ""
	I0908 12:31:18.314710  876341 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:31:18.324820  876341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 12:31:18.335516  876341 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 12:31:18.335584  876341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 12:31:18.347125  876341 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 12:31:18.347151  876341 kubeadm.go:157] found existing configuration files:
	
	I0908 12:31:18.347211  876341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 12:31:18.357552  876341 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 12:31:18.357633  876341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 12:31:18.367509  876341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 12:31:18.377463  876341 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 12:31:18.377532  876341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 12:31:18.387972  876341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 12:31:18.398119  876341 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 12:31:18.398218  876341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 12:31:18.407239  876341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 12:31:18.417143  876341 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 12:31:18.417209  876341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 12:31:18.427181  876341 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 12:31:18.492465  876341 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 12:31:18.492823  876341 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0908 12:31:18.570402  876341 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 12:31:29.901147  876341 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 12:31:29.901232  876341 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 12:31:29.901352  876341 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 12:31:29.901425  876341 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0908 12:31:29.901462  876341 kubeadm.go:310] OS: Linux
	I0908 12:31:29.901522  876341 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 12:31:29.901567  876341 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 12:31:29.901610  876341 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 12:31:29.901652  876341 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 12:31:29.901695  876341 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 12:31:29.901741  876341 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 12:31:29.901781  876341 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 12:31:29.901823  876341 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 12:31:29.901867  876341 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 12:31:29.901929  876341 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 12:31:29.902016  876341 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 12:31:29.902133  876341 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 12:31:29.902243  876341 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 12:31:29.903860  876341 out.go:252]   - Generating certificates and keys ...
	I0908 12:31:29.903977  876341 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 12:31:29.904076  876341 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 12:31:29.904170  876341 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 12:31:29.904221  876341 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 12:31:29.904279  876341 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 12:31:29.904339  876341 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 12:31:29.904413  876341 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 12:31:29.904565  876341 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-283124 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 12:31:29.904620  876341 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 12:31:29.904720  876341 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-283124 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0908 12:31:29.904779  876341 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 12:31:29.904835  876341 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 12:31:29.904875  876341 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 12:31:29.904928  876341 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 12:31:29.904987  876341 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 12:31:29.905043  876341 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 12:31:29.905129  876341 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 12:31:29.905244  876341 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 12:31:29.905313  876341 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 12:31:29.905435  876341 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 12:31:29.905552  876341 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 12:31:29.907765  876341 out.go:252]   - Booting up control plane ...
	I0908 12:31:29.907890  876341 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 12:31:29.908008  876341 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 12:31:29.908120  876341 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 12:31:29.908261  876341 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 12:31:29.908366  876341 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 12:31:29.908470  876341 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 12:31:29.908544  876341 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 12:31:29.908583  876341 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 12:31:29.908699  876341 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 12:31:29.908791  876341 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 12:31:29.908842  876341 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000868287s
	I0908 12:31:29.908921  876341 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 12:31:29.908993  876341 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0908 12:31:29.909075  876341 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 12:31:29.909143  876341 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 12:31:29.909213  876341 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.893336764s
	I0908 12:31:29.909285  876341 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.22277463s
	I0908 12:31:29.909343  876341 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.00162345s
	I0908 12:31:29.909436  876341 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 12:31:29.909542  876341 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 12:31:29.909592  876341 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 12:31:29.909761  876341 kubeadm.go:310] [mark-control-plane] Marking the node calico-283124 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 12:31:29.909818  876341 kubeadm.go:310] [bootstrap-token] Using token: mdumxr.qbxnw8tyauykr0g0
	I0908 12:31:29.911010  876341 out.go:252]   - Configuring RBAC rules ...
	I0908 12:31:29.911115  876341 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 12:31:29.911186  876341 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 12:31:29.911307  876341 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 12:31:29.911425  876341 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 12:31:29.911525  876341 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 12:31:29.911629  876341 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 12:31:29.911803  876341 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 12:31:29.911844  876341 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 12:31:29.911886  876341 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 12:31:29.911892  876341 kubeadm.go:310] 
	I0908 12:31:29.911949  876341 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 12:31:29.911955  876341 kubeadm.go:310] 
	I0908 12:31:29.912024  876341 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 12:31:29.912030  876341 kubeadm.go:310] 
	I0908 12:31:29.912062  876341 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 12:31:29.912122  876341 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 12:31:29.912169  876341 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 12:31:29.912175  876341 kubeadm.go:310] 
	I0908 12:31:29.912237  876341 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 12:31:29.912244  876341 kubeadm.go:310] 
	I0908 12:31:29.912290  876341 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 12:31:29.912296  876341 kubeadm.go:310] 
	I0908 12:31:29.912339  876341 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 12:31:29.912404  876341 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 12:31:29.912479  876341 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 12:31:29.912487  876341 kubeadm.go:310] 
	I0908 12:31:29.912601  876341 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 12:31:29.912676  876341 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 12:31:29.912686  876341 kubeadm.go:310] 
	I0908 12:31:29.912762  876341 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mdumxr.qbxnw8tyauykr0g0 \
	I0908 12:31:29.912854  876341 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7637fdf591f5caebb87eda0c466f59e2329c3b4b8f568d0c2721bf3a0b62aaf9 \
	I0908 12:31:29.912874  876341 kubeadm.go:310] 	--control-plane 
	I0908 12:31:29.912879  876341 kubeadm.go:310] 
	I0908 12:31:29.912956  876341 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 12:31:29.912963  876341 kubeadm.go:310] 
	I0908 12:31:29.913037  876341 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mdumxr.qbxnw8tyauykr0g0 \
	I0908 12:31:29.913155  876341 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7637fdf591f5caebb87eda0c466f59e2329c3b4b8f568d0c2721bf3a0b62aaf9 
	I0908 12:31:29.913168  876341 cni.go:84] Creating CNI manager for "calico"
	I0908 12:31:29.914525  876341 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0908 12:31:29.916235  876341 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 12:31:29.916260  876341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0908 12:31:29.936839  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 12:31:31.803118  876341 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.866229786s)
	I0908 12:31:31.803189  876341 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 12:31:31.803342  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:31:31.803367  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-283124 minikube.k8s.io/updated_at=2025_09_08T12_31_31_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64 minikube.k8s.io/name=calico-283124 minikube.k8s.io/primary=true
	I0908 12:31:31.924386  876341 ops.go:34] apiserver oom_adj: -16
	I0908 12:31:31.924415  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:31:32.424862  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:31:32.925012  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:31:33.424720  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:31:33.924840  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:31:34.424982  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:31:34.924901  876341 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:31:35.009558  876341 kubeadm.go:1105] duration metric: took 3.206302042s to wait for elevateKubeSystemPrivileges
	I0908 12:31:35.009599  876341 kubeadm.go:394] duration metric: took 16.736408857s to StartCluster
	I0908 12:31:35.009624  876341 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:35.009706  876341 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:31:35.011719  876341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:31:35.012069  876341 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:31:35.012364  876341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 12:31:35.012725  876341 config.go:182] Loaded profile config "calico-283124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:31:35.012789  876341 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:31:35.012874  876341 addons.go:69] Setting storage-provisioner=true in profile "calico-283124"
	I0908 12:31:35.012898  876341 addons.go:238] Setting addon storage-provisioner=true in "calico-283124"
	I0908 12:31:35.012929  876341 host.go:66] Checking if "calico-283124" exists ...
	I0908 12:31:35.013497  876341 cli_runner.go:164] Run: docker container inspect calico-283124 --format={{.State.Status}}
	I0908 12:31:35.013610  876341 addons.go:69] Setting default-storageclass=true in profile "calico-283124"
	I0908 12:31:35.013645  876341 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-283124"
	I0908 12:31:35.013977  876341 cli_runner.go:164] Run: docker container inspect calico-283124 --format={{.State.Status}}
	I0908 12:31:35.014973  876341 out.go:179] * Verifying Kubernetes components...
	I0908 12:31:35.016583  876341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:31:35.043698  876341 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:31:35.046692  876341 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:31:35.046719  876341 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:31:35.046797  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:35.055910  876341 addons.go:238] Setting addon default-storageclass=true in "calico-283124"
	I0908 12:31:35.055955  876341 host.go:66] Checking if "calico-283124" exists ...
	I0908 12:31:35.056320  876341 cli_runner.go:164] Run: docker container inspect calico-283124 --format={{.State.Status}}
	I0908 12:31:35.066583  876341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/calico-283124/id_rsa Username:docker}
	I0908 12:31:35.075914  876341 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:31:35.075944  876341 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:31:35.076022  876341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-283124
	I0908 12:31:35.110103  876341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/calico-283124/id_rsa Username:docker}
	I0908 12:31:35.302549  876341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 12:31:35.304789  876341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:31:35.314452  876341 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:31:35.386030  876341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:31:36.032018  876341 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0908 12:31:36.254108  876341 node_ready.go:35] waiting up to 15m0s for node "calico-283124" to be "Ready" ...
	I0908 12:31:36.279614  876341 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0908 12:31:36.280966  876341 addons.go:514] duration metric: took 1.268165026s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0908 12:31:36.537107  876341 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-283124" context rescaled to 1 replicas
	W0908 12:31:38.258658  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:31:40.758237  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:31:43.257173  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:31:45.257810  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:31:47.258063  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:31:49.258202  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:31:51.757924  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:31:54.257545  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:31:56.258310  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:31:58.757909  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:00.758083  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:02.758355  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:05.258107  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:07.258168  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:09.757186  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:11.757579  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:14.258304  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:16.758403  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:19.257208  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:21.257981  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:23.757356  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:25.757777  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:27.757972  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:29.758544  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:32.258022  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:34.757739  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:36.757975  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:39.258540  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:41.757214  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:43.758231  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:46.258482  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:48.757519  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:50.758421  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:53.258246  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:55.757851  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:32:57.758414  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:00.258178  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:02.758733  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:05.258088  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:07.258711  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:09.758292  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:11.758693  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:14.258525  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:16.758254  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:19.257452  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:21.258699  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:23.758620  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:26.258330  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:28.758422  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:31.257525  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:33.258383  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:35.758415  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:38.257745  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:40.258615  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:42.757698  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:44.757744  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:47.257653  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:49.758490  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:52.257573  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:54.758028  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:56.758342  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:33:59.257726  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:01.258333  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:03.757379  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:05.757604  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:07.758145  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:10.257635  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:12.757288  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:14.757407  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:16.758015  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:18.758229  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:21.257967  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:23.757383  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:25.757717  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:27.758682  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:30.257636  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:32.257754  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:34.758462  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:37.258157  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:39.757936  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:42.258721  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:44.757809  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:46.758086  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:49.257280  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:51.259046  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:53.759049  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:56.257529  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:34:58.258454  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:00.758178  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:02.758375  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:05.257766  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:07.757691  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:09.758265  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:12.257410  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:14.257551  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:16.258108  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:18.757322  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:21.258262  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:23.258443  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:25.258538  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:27.756892  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:29.757972  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:32.257204  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:34.257906  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:36.758025  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:39.258118  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:41.757657  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:44.257761  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:46.758287  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:49.257980  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:51.258346  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:53.758276  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:56.258327  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:35:58.758233  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:01.258044  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:03.757820  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:06.258164  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:08.757839  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:11.257634  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:13.257906  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:15.757282  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:17.757781  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:20.257448  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:22.258245  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:24.757762  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:27.257127  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:29.257472  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:31.757663  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:33.758237  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:36.257381  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:38.258080  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:40.258216  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:42.758116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:45.258155  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:47.258256  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:49.757606  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:51.758239  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:53.758326  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:56.257789  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:58.258533  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:00.758093  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:03.257787  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:05.757720  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:08.257784  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:10.757968  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:13.258116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:15.757477  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:17.757872  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:19.757985  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:22.258365  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:24.757273  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:27.257759  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:29.758603  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:32.257319  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:34.258404  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:36.757652  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:38.758275  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:41.257525  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:43.757901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:45.758150  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:48.257273  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:50.257639  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:52.757594  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:54.758061  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:56.758378  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:59.258066  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:01.757513  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:03.758132  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:06.258116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:08.757359  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:11.257772  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:13.258266  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:15.758076  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:18.258221  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:20.757456  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:22.757615  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:24.757944  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:27.257481  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:29.257676  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:31.757922  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:33.757998  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:35.758189  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:38.257284  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:40.258186  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:42.757563  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:44.758049  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:47.258155  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:49.758499  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:52.257549  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:54.257641  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:56.257796  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:58.758359  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:01.257901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:03.757752  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:06.257817  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:08.258296  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:10.757713  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:13.258258  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:15.757976  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:18.257584  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:20.257682  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:22.758060  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:25.257404  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:27.257971  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:29.757975  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:32.257556  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:34.257819  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:36.757633  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:38.757871  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:41.257638  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:43.257970  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:45.757733  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:47.758232  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:49.758583  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:52.257803  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:54.257902  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:56.758212  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:59.257321  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:01.257592  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:03.757620  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:06.257707  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:08.757824  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:11.257105  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:13.257921  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:15.258039  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:17.758096  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:20.258070  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:22.757269  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:24.757608  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:27.257916  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:29.758141  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:32.257932  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:34.758358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:37.257458  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:39.257731  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:41.758247  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:44.257810  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:46.258011  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:48.758378  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:51.257347  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:53.757974  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:56.258386  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:58.757745  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:00.758360  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:03.257917  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:05.758305  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:08.257694  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:10.757411  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:12.757802  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:15.258051  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:17.258437  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:19.758059  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:22.257165  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:24.257861  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:26.758229  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:29.257287  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:31.257940  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:33.757609  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:36.257193  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:38.257338  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:40.259086  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:42.757325  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:44.757506  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:46.757651  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:48.758048  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:51.257798  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:53.757260  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:55.758043  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:58.257673  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:00.757447  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:03.258213  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:05.758038  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:08.257935  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:10.757253  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:12.757315  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:14.758076  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:17.257358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:19.257904  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:21.757955  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:23.758139  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:26.258024  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:28.758804  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:31.257119  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:33.257824  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:35.257908  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:37.757486  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:39.757547  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:41.757854  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:44.258038  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:46.258403  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:48.758374  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:51.257068  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:53.258291  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:55.757944  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:58.258146  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:00.258571  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:02.758004  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:05.257358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:07.257469  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:09.258160  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:11.758090  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:14.257557  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:16.257748  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:18.258516  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:20.757930  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:23.257512  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:25.258146  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:27.757352  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:29.757963  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:32.257634  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:34.258269  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:36.758040  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:39.257138  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:41.257975  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:43.757450  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:45.758009  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:48.258119  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:50.757728  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:52.758476  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:55.258004  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:57.758245  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:00.257652  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:02.257901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:04.758074  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:06.758528  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:09.257856  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:11.757537  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:13.758186  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:16.257671  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:18.257951  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:20.757717  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:22.758111  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:25.258291  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:27.758147  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:30.257141  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:32.257828  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:34.258099  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:36.757903  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:39.257811  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:41.757835  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:43.757896  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:46.257631  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:48.757919  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:51.258326  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:53.757689  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:56.257769  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:58.758237  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:01.257880  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:03.258125  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:05.758255  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:08.257563  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:10.758121  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:12.758503  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:15.257621  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:17.258405  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:19.758075  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:22.257402  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:24.258425  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:26.757955  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:28.758271  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:31.257461  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:33.258189  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:35.757165  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:37.757806  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:40.257927  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:42.258011  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:44.258220  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:46.758295  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:49.258312  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:51.758196  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:54.257814  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:56.758797  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:59.258173  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:01.757360  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:03.758217  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:06.257756  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:08.757562  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:11.258350  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:13.757334  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:15.757830  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:18.258351  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:20.758305  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:23.258066  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:25.758112  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:28.258058  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:30.758007  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:33.257298  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:35.257528  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	I0908 12:46:36.255177  876341 node_ready.go:38] duration metric: took 15m0.000977916s for node "calico-283124" to be "Ready" ...
	I0908 12:46:36.257360  876341 out.go:203] 
	W0908 12:46:36.258826  876341 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0908 12:46:36.258848  876341 out.go:285] * 
	* 
	W0908 12:46:36.261468  876341 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0908 12:46:36.262842  876341 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (929.55s)
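The node never reaches Ready within the 15m0s wait, which with the calico CNI typically means the calico-node DaemonSet never became healthy, so the kubelet keeps reporting the network plugin as not ready. A minimal triage sketch (the profile/context name calico-283124 is taken from the log above; the k8s-app=calico-node label and the calico-node container name are assumptions based on the stock Calico manifest, not on this report):

    # Why is the node NotReady? Check the Ready condition's reason/message.
    kubectl --context calico-283124 describe node calico-283124

    # Did the Calico pods ever start? ImagePullBackOff or CrashLoopBackOff here would
    # explain a node that never becomes Ready.
    kubectl --context calico-283124 -n kube-system get pods -l k8s-app=calico-node -o wide
    kubectl --context calico-283124 -n kube-system logs -l k8s-app=calico-node -c calico-node --tail=100

    # Collect the full log bundle referenced in the advice box above.
    out/minikube-linux-amd64 -p calico-283124 logs --file=logs.txt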

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7dbrb" [4704cc58-09c7-49d5-a649-d7e9fd6c1297] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-896003 -n old-k8s-version-896003
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-08 12:45:38.713514045 +0000 UTC m=+4358.006256520
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-896003 describe po kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-896003 describe po kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-7dbrb
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-896003/192.168.85.2
Start Time:       Mon, 08 Sep 2025 12:36:15 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zpgmq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-zpgmq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m22s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb to old-k8s-version-896003
  Normal   Pulling    6m25s (x4 over 9m22s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     5m55s (x4 over 8m52s)  kubelet            Error: ErrImagePull
  Warning  Failed     5m42s (x6 over 8m51s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    5m28s (x7 over 8m51s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     3m55s (x5 over 8m52s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-896003 logs kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-896003 logs kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard: exit status 1 (81.313291ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-7dbrb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-896003 logs kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
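The Events above pin the root cause on Docker Hub's anonymous pull rate limit rather than on the dashboard addon itself: every pull of kubernetesui/dashboard returned toomanyrequests, so the pod sat in ImagePullBackOff until the 9m0s wait expired. A quick way to confirm the limit from the affected node (a sketch; it assumes the old-k8s-version-896003 profile still exists and jq is available on the host, and it uses Docker's documented ratelimitpreview/test probe, which is not part of this report):

    # retry the pull through CRI-O on the node itself
    minikube -p old-k8s-version-896003 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0
    # inspect the anonymous rate-limit counters Docker Hub reports for this host (HEAD requests do not consume pulls)
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'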
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-896003
helpers_test.go:243: (dbg) docker inspect old-k8s-version-896003:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7",
	        "Created": "2025-09-08T12:34:41.454905879Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 936820,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:35:51.787540238Z",
	            "FinishedAt": "2025-09-08T12:35:50.960911601Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7/hosts",
	        "LogPath": "/var/lib/docker/containers/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7-json.log",
	        "Name": "/old-k8s-version-896003",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-896003:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-896003",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7",
	                "LowerDir": "/var/lib/docker/overlay2/55405109ee2707194600695162bcd95b400c46c234120d59f9858c0ddffc7f9a-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55405109ee2707194600695162bcd95b400c46c234120d59f9858c0ddffc7f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55405109ee2707194600695162bcd95b400c46c234120d59f9858c0ddffc7f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55405109ee2707194600695162bcd95b400c46c234120d59f9858c0ddffc7f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-896003",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-896003/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-896003",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-896003",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-896003",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b8ff3678afbcaaeea1c8d7fab6e289df9defe62e616e94ced89ff3f87425dfe2",
	            "SandboxKey": "/var/run/docker/netns/b8ff3678afbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-896003": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:e3:f0:e5:43:e4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "64680b14c5747b6ba7a6a2cd81d8d5f27c97be0f750b792a24a5e67bd1710746",
	                    "EndpointID": "f7da3850caf4bf0d9e10d9ddee68a0c968d16a3f27216496844aa00a2a9cfe82",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-896003",
	                        "b5a486cde8d6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-896003 -n old-k8s-version-896003
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-896003 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-896003 logs -n 25: (1.244836919s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-283124 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:34 UTC │
	│ ssh     │ -p bridge-283124 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:34 UTC │
	│ ssh     │ -p bridge-283124 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │                     │
	│ ssh     │ -p bridge-283124 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:34 UTC │
	│ ssh     │ -p bridge-283124 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo containerd config dump                                                                                                                                                                                                  │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo crio config                                                                                                                                                                                                             │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ delete  │ -p bridge-283124                                                                                                                                                                                                                              │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ start   │ -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-896003 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ stop    │ -p old-k8s-version-896003 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-997730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ stop    │ -p no-preload-997730 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-896003 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ start   │ -p old-k8s-version-896003 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable dashboard -p no-preload-997730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ start   │ -p no-preload-997730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-039958 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ stop    │ -p default-k8s-diff-port-039958 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-039958 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ start   │ -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:36:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:36:46.576701  942848 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:36:46.576859  942848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:36:46.576870  942848 out.go:374] Setting ErrFile to fd 2...
	I0908 12:36:46.576877  942848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:36:46.577119  942848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:36:46.577715  942848 out.go:368] Setting JSON to false
	I0908 12:36:46.579062  942848 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11951,"bootTime":1757323056,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 12:36:46.579193  942848 start.go:140] virtualization: kvm guest
	I0908 12:36:46.581327  942848 out.go:179] * [default-k8s-diff-port-039958] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 12:36:46.582661  942848 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:36:46.582698  942848 notify.go:220] Checking for updates...
	I0908 12:36:46.584965  942848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:36:46.586098  942848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:36:46.587326  942848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 12:36:46.588738  942848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 12:36:46.590003  942848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:36:46.591594  942848 config.go:182] Loaded profile config "default-k8s-diff-port-039958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:36:46.592226  942848 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:36:46.618634  942848 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:36:46.618773  942848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:36:46.679298  942848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:36:46.668942756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:36:46.679407  942848 docker.go:318] overlay module found
	I0908 12:36:46.681016  942848 out.go:179] * Using the docker driver based on existing profile
	I0908 12:36:46.682334  942848 start.go:304] selected driver: docker
	I0908 12:36:46.682353  942848 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:36:46.682476  942848 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:36:46.683426  942848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:36:46.745282  942848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:36:46.73243227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:36:46.745663  942848 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:36:46.745700  942848 cni.go:84] Creating CNI manager for ""
	I0908 12:36:46.745763  942848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:36:46.745814  942848 start.go:348] cluster config:
	{Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:36:46.747972  942848 out.go:179] * Starting "default-k8s-diff-port-039958" primary control-plane node in "default-k8s-diff-port-039958" cluster
	I0908 12:36:46.749230  942848 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:36:46.750628  942848 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:36:46.751931  942848 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:36:46.751992  942848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:36:46.752002  942848 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 12:36:46.752111  942848 cache.go:58] Caching tarball of preloaded images
	I0908 12:36:46.752219  942848 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 12:36:46.752258  942848 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:36:46.752419  942848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/config.json ...
	I0908 12:36:46.780591  942848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:36:46.780624  942848 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:36:46.780647  942848 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:36:46.780682  942848 start.go:360] acquireMachinesLock for default-k8s-diff-port-039958: {Name:mk74fa9073ebc792abfeccea0efe5ebf172e66a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:36:46.780761  942848 start.go:364] duration metric: took 51.375µs to acquireMachinesLock for "default-k8s-diff-port-039958"
	I0908 12:36:46.780788  942848 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:36:46.780799  942848 fix.go:54] fixHost starting: 
	I0908 12:36:46.781129  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:46.803941  942848 fix.go:112] recreateIfNeeded on default-k8s-diff-port-039958: state=Stopped err=<nil>
	W0908 12:36:46.803983  942848 fix.go:138] unexpected machine state, will restart: <nil>
	W0908 12:36:42.758116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:45.258155  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:42.045527  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	W0908 12:36:44.545681  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	W0908 12:36:46.546066  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	I0908 12:36:46.806070  942848 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-039958" ...
	I0908 12:36:46.806212  942848 cli_runner.go:164] Run: docker start default-k8s-diff-port-039958
	I0908 12:36:47.111853  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:47.137411  942848 kic.go:430] container "default-k8s-diff-port-039958" state is running.
	I0908 12:36:47.137907  942848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-039958
	I0908 12:36:47.162432  942848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/config.json ...
	I0908 12:36:47.162670  942848 machine.go:93] provisionDockerMachine start ...
	I0908 12:36:47.162747  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:47.185220  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:47.185582  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:47.185597  942848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:36:47.186433  942848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51208->127.0.0.1:33483: read: connection reset by peer
	I0908 12:36:50.319771  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039958
	
	I0908 12:36:50.319812  942848 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-039958"
	I0908 12:36:50.319874  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:50.341500  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:50.341753  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:50.341765  942848 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-039958 && echo "default-k8s-diff-port-039958" | sudo tee /etc/hostname
	I0908 12:36:50.492659  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039958
	
	I0908 12:36:50.492756  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:50.516857  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:50.517256  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:50.517301  942848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-039958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-039958/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-039958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:36:50.644286  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:36:50.644321  942848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 12:36:50.644344  942848 ubuntu.go:190] setting up certificates
	I0908 12:36:50.644356  942848 provision.go:84] configureAuth start
	I0908 12:36:50.644424  942848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-039958
	I0908 12:36:50.662342  942848 provision.go:143] copyHostCerts
	I0908 12:36:50.662414  942848 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem, removing ...
	I0908 12:36:50.662431  942848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem
	I0908 12:36:50.662496  942848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 12:36:50.662596  942848 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem, removing ...
	I0908 12:36:50.662605  942848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem
	I0908 12:36:50.662630  942848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 12:36:50.662714  942848 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem, removing ...
	I0908 12:36:50.662722  942848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem
	I0908 12:36:50.662742  942848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 12:36:50.662805  942848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-039958 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-039958 localhost minikube]
	I0908 12:36:50.862531  942848 provision.go:177] copyRemoteCerts
	I0908 12:36:50.862604  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:36:50.862646  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:50.885478  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:50.986239  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:36:51.016291  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0908 12:36:51.045268  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 12:36:51.069675  942848 provision.go:87] duration metric: took 425.304221ms to configureAuth
	I0908 12:36:51.069704  942848 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:36:51.069902  942848 config.go:182] Loaded profile config "default-k8s-diff-port-039958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:36:51.070014  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.094609  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:51.094825  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:51.094845  942848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:36:51.430315  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:36:51.430362  942848 machine.go:96] duration metric: took 4.267670025s to provisionDockerMachine
	I0908 12:36:51.430380  942848 start.go:293] postStartSetup for "default-k8s-diff-port-039958" (driver="docker")
	I0908 12:36:51.430395  942848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:36:51.430518  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:36:51.430587  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.451170  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.543737  942848 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:36:51.548216  942848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:36:51.548260  942848 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:36:51.548273  942848 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:36:51.548282  942848 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:36:51.548296  942848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 12:36:51.548366  942848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 12:36:51.548469  942848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem -> 6186202.pem in /etc/ssl/certs
	I0908 12:36:51.548587  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:36:51.558329  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /etc/ssl/certs/6186202.pem (1708 bytes)
	W0908 12:36:47.258256  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:49.757606  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:51.758239  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:49.046133  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	W0908 12:36:51.081394  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	I0908 12:36:51.586425  942848 start.go:296] duration metric: took 156.023527ms for postStartSetup
	I0908 12:36:51.586525  942848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:36:51.586571  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.613258  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.705344  942848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:36:51.711811  942848 fix.go:56] duration metric: took 4.931000802s for fixHost
	I0908 12:36:51.711849  942848 start.go:83] releasing machines lock for "default-k8s-diff-port-039958", held for 4.931072765s
	I0908 12:36:51.711931  942848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-039958
	I0908 12:36:51.734101  942848 ssh_runner.go:195] Run: cat /version.json
	I0908 12:36:51.734183  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.734267  942848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:36:51.734367  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.754850  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.755853  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.923783  942848 ssh_runner.go:195] Run: systemctl --version
	I0908 12:36:51.929275  942848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:36:52.084547  942848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:36:52.090132  942848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:36:52.101273  942848 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:36:52.101378  942848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:36:52.111707  942848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 12:36:52.111743  942848 start.go:495] detecting cgroup driver to use...
	I0908 12:36:52.111782  942848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:36:52.111825  942848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:36:52.126947  942848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:36:52.140290  942848 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:36:52.140371  942848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:36:52.154876  942848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:36:52.168633  942848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:36:52.273095  942848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:36:52.357717  942848 docker.go:234] disabling docker service ...
	I0908 12:36:52.357806  942848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:36:52.372526  942848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:36:52.385814  942848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:36:52.476450  942848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:36:52.566747  942848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:36:52.581723  942848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:36:52.605430  942848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:36:52.605564  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.619096  942848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:36:52.619198  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.632585  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.646076  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.658574  942848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:36:52.668753  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.680494  942848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.693152  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.705737  942848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:36:52.715688  942848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:36:52.725850  942848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:36:52.815349  942848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 12:36:53.835349  942848 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.019888602s)
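The sequence above writes /etc/crictl.yaml, patches the 02-crio.conf drop-in (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl), and restarts CRI-O. A minimal sketch for inspecting the result on the node, using only paths taken from this log:

	# confirm the crictl endpoint and the CRI-O drop-in settings the steps above produced
	cat /etc/crictl.yaml                                     # runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version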
	I0908 12:36:53.835376  942848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:36:53.835423  942848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:36:53.839640  942848 start.go:563] Will wait 60s for crictl version
	I0908 12:36:53.839788  942848 ssh_runner.go:195] Run: which crictl
	I0908 12:36:53.844312  942848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:36:53.880145  942848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 12:36:53.880265  942848 ssh_runner.go:195] Run: crio --version
	I0908 12:36:53.927894  942848 ssh_runner.go:195] Run: crio --version
	I0908 12:36:53.977239  942848 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 12:36:52.577096  938712 pod_ready.go:94] pod "coredns-66bc5c9577-nd9km" is "Ready"
	I0908 12:36:52.577136  938712 pod_ready.go:86] duration metric: took 36.037680544s for pod "coredns-66bc5c9577-nd9km" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.582158  938712 pod_ready.go:83] waiting for pod "etcd-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.588939  938712 pod_ready.go:94] pod "etcd-no-preload-997730" is "Ready"
	I0908 12:36:52.588976  938712 pod_ready.go:86] duration metric: took 6.784149ms for pod "etcd-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.591480  938712 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.598103  938712 pod_ready.go:94] pod "kube-apiserver-no-preload-997730" is "Ready"
	I0908 12:36:52.598137  938712 pod_ready.go:86] duration metric: took 6.627132ms for pod "kube-apiserver-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.601886  938712 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.743487  938712 pod_ready.go:94] pod "kube-controller-manager-no-preload-997730" is "Ready"
	I0908 12:36:52.743515  938712 pod_ready.go:86] duration metric: took 141.597757ms for pod "kube-controller-manager-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.944115  938712 pod_ready.go:83] waiting for pod "kube-proxy-wqscj" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.342976  938712 pod_ready.go:94] pod "kube-proxy-wqscj" is "Ready"
	I0908 12:36:53.343007  938712 pod_ready.go:86] duration metric: took 398.863544ms for pod "kube-proxy-wqscj" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.543367  938712 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.943688  938712 pod_ready.go:94] pod "kube-scheduler-no-preload-997730" is "Ready"
	I0908 12:36:53.943731  938712 pod_ready.go:86] duration metric: took 400.331351ms for pod "kube-scheduler-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.943745  938712 pod_ready.go:40] duration metric: took 37.408844643s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:36:54.001636  938712 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:36:54.003368  938712 out.go:179] * Done! kubectl is now configured to use "no-preload-997730" cluster and "default" namespace by default
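At this point the kubeconfig has an entry for the no-preload profile. A short usage sketch from the host, assuming (as minikube normally does) that the context is named after the profile:

	# select the freshly configured context and confirm the cluster answers
	kubectl config use-context no-preload-997730
	kubectl get nodes -o wide
	kubectl -n kube-system get pods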
	I0908 12:36:53.980801  942848 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-039958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:36:54.005208  942848 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0908 12:36:54.009589  942848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:36:54.022563  942848 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:36:54.022720  942848 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:36:54.022776  942848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:36:54.076190  942848 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:36:54.076225  942848 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:36:54.076295  942848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:36:54.118904  942848 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:36:54.118932  942848 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:36:54.118943  942848 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 crio true true} ...
	I0908 12:36:54.119083  942848 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-039958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
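The block above is the kubelet systemd drop-in minikube renders; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A minimal sketch for checking what systemd will actually run on the node:

	# view the merged kubelet unit and the drop-in that carries the ExecStart shown above
	systemctl cat kubelet
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf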
	I0908 12:36:54.119170  942848 ssh_runner.go:195] Run: crio config
	I0908 12:36:54.171743  942848 cni.go:84] Creating CNI manager for ""
	I0908 12:36:54.171768  942848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:36:54.171782  942848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:36:54.171813  942848 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-039958 NodeName:default-k8s-diff-port-039958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:36:54.171991  942848 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-039958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:36:54.172070  942848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:36:54.182142  942848 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:36:54.182220  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:36:54.192725  942848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0908 12:36:54.214079  942848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:36:54.234494  942848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
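The kubeadm config dumped earlier is staged as /var/tmp/minikube/kubeadm.yaml.new. A sketch for sanity-checking it by hand; the diff mirrors the comparison minikube itself runs later in this log, while the dry-run is purely illustrative and not a step the test performs:

	# compare the freshly rendered config against the one already on disk,
	# then feed it through the version-pinned kubeadm without changing anything
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run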
	I0908 12:36:54.255523  942848 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:36:54.260549  942848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:36:54.274598  942848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:36:54.363767  942848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:36:54.380309  942848 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958 for IP: 192.168.103.2
	I0908 12:36:54.380327  942848 certs.go:194] generating shared ca certs ...
	I0908 12:36:54.380345  942848 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:54.380497  942848 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 12:36:54.380536  942848 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 12:36:54.380543  942848 certs.go:256] generating profile certs ...
	I0908 12:36:54.380626  942848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/client.key
	I0908 12:36:54.380670  942848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/apiserver.key.b7da9e12
	I0908 12:36:54.380700  942848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/proxy-client.key
	I0908 12:36:54.380808  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem (1338 bytes)
	W0908 12:36:54.380832  942848 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620_empty.pem, impossibly tiny 0 bytes
	I0908 12:36:54.380839  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:36:54.380860  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:36:54.380878  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:36:54.380900  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 12:36:54.380952  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:36:54.381854  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:36:54.413826  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:36:54.444441  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:36:54.499191  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 12:36:54.595250  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0908 12:36:54.624909  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:36:54.652144  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:36:54.679150  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 12:36:54.706419  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem --> /usr/share/ca-certificates/618620.pem (1338 bytes)
	I0908 12:36:54.733331  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /usr/share/ca-certificates/6186202.pem (1708 bytes)
	I0908 12:36:54.759761  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:36:54.786705  942848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:36:54.808171  942848 ssh_runner.go:195] Run: openssl version
	I0908 12:36:54.814430  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:36:54.826103  942848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:36:54.830371  942848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:36:54.830445  942848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:36:54.838010  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:36:54.848205  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/618620.pem && ln -fs /usr/share/ca-certificates/618620.pem /etc/ssl/certs/618620.pem"
	I0908 12:36:54.859075  942848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/618620.pem
	I0908 12:36:54.863257  942848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:43 /usr/share/ca-certificates/618620.pem
	I0908 12:36:54.863336  942848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/618620.pem
	I0908 12:36:54.871793  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/618620.pem /etc/ssl/certs/51391683.0"
	I0908 12:36:54.882122  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6186202.pem && ln -fs /usr/share/ca-certificates/6186202.pem /etc/ssl/certs/6186202.pem"
	I0908 12:36:54.894077  942848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6186202.pem
	I0908 12:36:54.898061  942848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:43 /usr/share/ca-certificates/6186202.pem
	I0908 12:36:54.898134  942848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6186202.pem
	I0908 12:36:54.907305  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6186202.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:36:54.919955  942848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:36:54.924868  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:36:54.932535  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:36:54.940947  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:36:54.949980  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:36:54.958562  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:36:54.967065  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
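Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours (non-zero exit if it does). The same check over all control-plane certs, as a sketch using the cert directory from this log:

	# fail loudly if any minikube control-plane certificate expires within 24h (86400 seconds)
	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring soon: $c"
	done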
	I0908 12:36:54.980901  942848 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:36:54.981020  942848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:36:54.981071  942848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:36:55.092299  942848 cri.go:89] found id: ""
	I0908 12:36:55.092362  942848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:36:55.105002  942848 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:36:55.105028  942848 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:36:55.105086  942848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:36:55.180113  942848 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:36:55.181205  942848 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-039958" does not appear in /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:36:55.181925  942848 kubeconfig.go:62] /home/jenkins/minikube-integration/21512-614854/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-039958" cluster setting kubeconfig missing "default-k8s-diff-port-039958" context setting]
	I0908 12:36:55.182972  942848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:55.184794  942848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:36:55.203380  942848 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0908 12:36:55.203435  942848 kubeadm.go:593] duration metric: took 98.400373ms to restartPrimaryControlPlane
	I0908 12:36:55.203451  942848 kubeadm.go:394] duration metric: took 222.56119ms to StartCluster
	I0908 12:36:55.203480  942848 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:55.203583  942848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:36:55.205699  942848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:55.206063  942848 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:36:55.206341  942848 config.go:182] Loaded profile config "default-k8s-diff-port-039958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:36:55.206406  942848 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
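The map above is the resolved addon set for this profile (dashboard, default-storageclass, metrics-server and storage-provisioner enabled). The same state can be inspected or changed from the host with the profile flag, as the log itself suggests further down; a sketch:

	# inspect and toggle addons for this profile from the host
	minikube -p default-k8s-diff-port-039958 addons list
	minikube -p default-k8s-diff-port-039958 addons enable metrics-server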
	I0908 12:36:55.206498  942848 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.206517  942848 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.206526  942848 addons.go:247] addon storage-provisioner should already be in state true
	I0908 12:36:55.206558  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.207111  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.207198  942848 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.207239  942848 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-039958"
	I0908 12:36:55.207501  942848 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.207521  942848 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.207529  942848 addons.go:247] addon dashboard should already be in state true
	I0908 12:36:55.207568  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.207608  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.207859  942848 out.go:179] * Verifying Kubernetes components...
	I0908 12:36:55.208037  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.208345  942848 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.208367  942848 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.208375  942848 addons.go:247] addon metrics-server should already be in state true
	I0908 12:36:55.208414  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.208857  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.209878  942848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:36:55.234038  942848 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.234074  942848 addons.go:247] addon default-storageclass should already be in state true
	I0908 12:36:55.234113  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.234616  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.234716  942848 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 12:36:55.235893  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 12:36:55.235919  942848 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 12:36:55.235988  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.237895  942848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:36:55.239291  942848 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:36:55.239317  942848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:36:55.239376  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.241236  942848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 12:36:55.242448  942848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 12:36:55.243535  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 12:36:55.243556  942848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 12:36:55.243627  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.259213  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.261274  942848 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:36:55.261304  942848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:36:55.261388  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.265130  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.265428  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.287889  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.506429  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 12:36:55.506482  942848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 12:36:55.507123  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 12:36:55.507149  942848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 12:36:55.676343  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 12:36:55.676443  942848 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 12:36:55.679168  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:36:55.684795  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 12:36:55.684827  942848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 12:36:55.699825  942848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:36:55.778296  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:36:55.778917  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:36:55.778944  942848 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 12:36:55.783526  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 12:36:55.783560  942848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 12:36:55.884019  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 12:36:55.884050  942848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 12:36:55.885352  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:36:55.993916  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 12:36:55.993953  942848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 12:36:56.092995  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 12:36:56.093029  942848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 12:36:56.189962  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 12:36:56.190002  942848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 12:36:56.213231  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 12:36:56.213277  942848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 12:36:56.298377  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 12:36:56.298412  942848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 12:36:56.321438  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0908 12:36:53.758326  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:56.257789  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	I0908 12:37:01.113673  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.434453923s)
	I0908 12:37:01.113770  942848 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.413909704s)
	I0908 12:37:01.113807  942848 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-039958" to be "Ready" ...
	I0908 12:37:01.114220  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.335889805s)
	I0908 12:37:01.179947  942848 node_ready.go:49] node "default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:01.179986  942848 node_ready.go:38] duration metric: took 66.160185ms for node "default-k8s-diff-port-039958" to be "Ready" ...
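The readiness gate minikube just passed can be expressed directly with kubectl; a sketch equivalent to the 6m0s node wait above:

	# block until the node reports the Ready condition, mirroring the 6m0s wait in the log
	kubectl wait --for=condition=Ready node/default-k8s-diff-port-039958 --timeout=6m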
	I0908 12:37:01.180005  942848 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:37:01.180076  942848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:37:01.188491  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.303097359s)
	I0908 12:37:01.188538  942848 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-039958"
	I0908 12:37:01.188647  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.867162606s)
	I0908 12:37:01.190470  942848 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-039958 addons enable metrics-server
	
	I0908 12:37:01.192011  942848 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 12:37:01.193234  942848 addons.go:514] duration metric: took 5.986829567s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 12:37:01.196437  942848 api_server.go:72] duration metric: took 5.990326761s to wait for apiserver process to appear ...
	I0908 12:37:01.196458  942848 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:37:01.196476  942848 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0908 12:37:01.201894  942848 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:37:01.201920  942848 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
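The per-check breakdown above is the apiserver's verbose healthz output; an overall 500 with only the rbac/bootstrap-roles and apiservice-discovery-controller hooks failing is the usual transient state right after a control-plane restart. Once a kubeconfig works, the same report can be fetched by hand; a sketch:

	# fetch the verbose health report the log is polling (kubectl handles authentication)
	kubectl get --raw='/healthz?verbose'
	kubectl get --raw='/readyz?verbose'    # readyz gives the equivalent per-hook breakdown for readiness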
	W0908 12:36:58.258533  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:00.758093  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	I0908 12:37:01.696590  942848 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0908 12:37:01.702086  942848 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:37:01.702131  942848 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:37:02.196683  942848 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0908 12:37:02.203013  942848 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0908 12:37:02.204330  942848 api_server.go:141] control plane version: v1.34.0
	I0908 12:37:02.204361  942848 api_server.go:131] duration metric: took 1.007896936s to wait for apiserver health ...
	I0908 12:37:02.204370  942848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:37:02.208721  942848 system_pods.go:59] 9 kube-system pods found
	I0908 12:37:02.208782  942848 system_pods.go:61] "coredns-66bc5c9577-gb4rh" [6a5ce944-87ea-43fc-8e1b-6b7c6602d782] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:37:02.208795  942848 system_pods.go:61] "etcd-default-k8s-diff-port-039958" [a8523098-eac8-46a4-9c22-2a2bc216a18c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:37:02.208804  942848 system_pods.go:61] "kindnet-89lwp" [7f14a0af-69dc-410a-a7de-fd608eb510b7] Running
	I0908 12:37:02.208812  942848 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039958" [94dee560-f2b4-4c2b-beb8-9f3b0042b381] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:37:02.208819  942848 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039958" [570eff95-5de5-4d44-8c20-2d50d16c0d96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:37:02.208831  942848 system_pods.go:61] "kube-proxy-cgrs8" [a898f47e-1688-4fb8-8f23-f1a05d5a1f33] Running
	I0908 12:37:02.208836  942848 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039958" [d958fbef-3165-4429-b5a0-305a0ff21dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:37:02.208841  942848 system_pods.go:61] "metrics-server-746fcd58dc-hvqdm" [d648640c-2cab-4575-8290-51c39f0a19b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:37:02.208844  942848 system_pods.go:61] "storage-provisioner" [89c03a4b-2580-4ee8-a683-d1523dd99fef] Running
	I0908 12:37:02.208850  942848 system_pods.go:74] duration metric: took 4.474582ms to wait for pod list to return data ...
	I0908 12:37:02.208861  942848 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:37:02.211700  942848 default_sa.go:45] found service account: "default"
	I0908 12:37:02.211729  942848 default_sa.go:55] duration metric: took 2.854101ms for default service account to be created ...
	I0908 12:37:02.211739  942848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:37:02.215028  942848 system_pods.go:86] 9 kube-system pods found
	I0908 12:37:02.215070  942848 system_pods.go:89] "coredns-66bc5c9577-gb4rh" [6a5ce944-87ea-43fc-8e1b-6b7c6602d782] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:37:02.215078  942848 system_pods.go:89] "etcd-default-k8s-diff-port-039958" [a8523098-eac8-46a4-9c22-2a2bc216a18c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:37:02.215083  942848 system_pods.go:89] "kindnet-89lwp" [7f14a0af-69dc-410a-a7de-fd608eb510b7] Running
	I0908 12:37:02.215088  942848 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-039958" [94dee560-f2b4-4c2b-beb8-9f3b0042b381] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:37:02.215095  942848 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-039958" [570eff95-5de5-4d44-8c20-2d50d16c0d96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:37:02.215099  942848 system_pods.go:89] "kube-proxy-cgrs8" [a898f47e-1688-4fb8-8f23-f1a05d5a1f33] Running
	I0908 12:37:02.215105  942848 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-039958" [d958fbef-3165-4429-b5a0-305a0ff21dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:37:02.215109  942848 system_pods.go:89] "metrics-server-746fcd58dc-hvqdm" [d648640c-2cab-4575-8290-51c39f0a19b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:37:02.215119  942848 system_pods.go:89] "storage-provisioner" [89c03a4b-2580-4ee8-a683-d1523dd99fef] Running
	I0908 12:37:02.215127  942848 system_pods.go:126] duration metric: took 3.381403ms to wait for k8s-apps to be running ...
	I0908 12:37:02.215134  942848 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:37:02.215182  942848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:37:02.228839  942848 system_svc.go:56] duration metric: took 13.689257ms WaitForService to wait for kubelet
	I0908 12:37:02.228878  942848 kubeadm.go:578] duration metric: took 7.022770217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:37:02.228905  942848 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:37:02.232419  942848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 12:37:02.232451  942848 node_conditions.go:123] node cpu capacity is 8
	I0908 12:37:02.232465  942848 node_conditions.go:105] duration metric: took 3.554674ms to run NodePressure ...
	I0908 12:37:02.232479  942848 start.go:241] waiting for startup goroutines ...
	I0908 12:37:02.232487  942848 start.go:246] waiting for cluster config update ...
	I0908 12:37:02.232498  942848 start.go:255] writing updated cluster config ...
	I0908 12:37:02.232770  942848 ssh_runner.go:195] Run: rm -f paused
	I0908 12:37:02.236948  942848 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:37:02.241091  942848 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gb4rh" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 12:37:04.247344  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:06.247957  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:03.257787  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:05.757720  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:08.748018  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:11.247224  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:08.257784  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:10.757968  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:13.747206  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:15.748096  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:13.258116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:15.757477  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:18.247360  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:20.247841  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:17.757872  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:19.757985  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:22.747356  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:25.247866  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:22.258365  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:24.757273  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:27.747272  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:29.747903  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:27.257759  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:29.758603  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:32.246724  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	I0908 12:37:32.746600  942848 pod_ready.go:94] pod "coredns-66bc5c9577-gb4rh" is "Ready"
	I0908 12:37:32.746633  942848 pod_ready.go:86] duration metric: took 30.50551235s for pod "coredns-66bc5c9577-gb4rh" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.749803  942848 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.754452  942848 pod_ready.go:94] pod "etcd-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:32.754481  942848 pod_ready.go:86] duration metric: took 4.650443ms for pod "etcd-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.757100  942848 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.761953  942848 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:32.761985  942848 pod_ready.go:86] duration metric: took 4.849995ms for pod "kube-apiserver-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.764191  942848 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.945383  942848 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:32.945422  942848 pod_ready.go:86] duration metric: took 181.203994ms for pod "kube-controller-manager-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:33.145058  942848 pod_ready.go:83] waiting for pod "kube-proxy-cgrs8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:33.544898  942848 pod_ready.go:94] pod "kube-proxy-cgrs8" is "Ready"
	I0908 12:37:33.544927  942848 pod_ready.go:86] duration metric: took 399.833177ms for pod "kube-proxy-cgrs8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:33.745634  942848 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:34.144919  942848 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:34.144965  942848 pod_ready.go:86] duration metric: took 399.29663ms for pod "kube-scheduler-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:34.144988  942848 pod_ready.go:40] duration metric: took 31.907998549s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:37:34.196309  942848 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:37:34.198553  942848 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-039958" cluster and "default" namespace by default
	W0908 12:37:32.257319  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:34.258404  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:36.757652  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:38.758275  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:41.257525  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:43.757901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:45.758150  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:48.257273  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:50.257639  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:52.757594  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:54.758061  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:56.758378  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:59.258066  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:01.757513  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:03.758132  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:06.258116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:08.757359  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:11.257772  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:13.258266  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:15.758076  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:18.258221  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:20.757456  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:22.757615  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:24.757944  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:27.257481  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:29.257676  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:31.757922  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:33.757998  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:35.758189  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:38.257284  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:40.258186  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:42.757563  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:44.758049  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:47.258155  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:49.758499  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:52.257549  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:54.257641  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:56.257796  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:58.758359  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:01.257901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:03.757752  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:06.257817  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:08.258296  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:10.757713  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:13.258258  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:15.757976  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:18.257584  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:20.257682  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:22.758060  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:25.257404  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:27.257971  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:29.757975  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:32.257556  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:34.257819  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:36.757633  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:38.757871  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:41.257638  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:43.257970  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:45.757733  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:47.758232  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:49.758583  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:52.257803  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:54.257902  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:56.758212  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:59.257321  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:01.257592  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:03.757620  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:06.257707  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:08.757824  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:11.257105  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:13.257921  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:15.258039  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:17.758096  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:20.258070  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:22.757269  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:24.757608  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:27.257916  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:29.758141  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:32.257932  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:34.758358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:37.257458  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:39.257731  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:41.758247  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:44.257810  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:46.258011  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:48.758378  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:51.257347  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:53.757974  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:56.258386  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:58.757745  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:00.758360  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:03.257917  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:05.758305  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:08.257694  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:10.757411  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:12.757802  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:15.258051  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:17.258437  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:19.758059  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:22.257165  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:24.257861  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:26.758229  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:29.257287  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:31.257940  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:33.757609  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:36.257193  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:38.257338  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:40.259086  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:42.757325  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:44.757506  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:46.757651  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:48.758048  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:51.257798  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:53.757260  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:55.758043  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:58.257673  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:00.757447  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:03.258213  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:05.758038  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:08.257935  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:10.757253  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:12.757315  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:14.758076  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:17.257358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:19.257904  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:21.757955  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:23.758139  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:26.258024  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:28.758804  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:31.257119  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:33.257824  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:35.257908  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:37.757486  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:39.757547  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:41.757854  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:44.258038  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:46.258403  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:48.758374  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:51.257068  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:53.258291  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:55.757944  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:58.258146  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:00.258571  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:02.758004  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:05.257358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:07.257469  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:09.258160  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:11.758090  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:14.257557  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:16.257748  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:18.258516  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:20.757930  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:23.257512  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:25.258146  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:27.757352  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:29.757963  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:32.257634  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:34.258269  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:36.758040  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:39.257138  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:41.257975  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:43.757450  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:45.758009  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:48.258119  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:50.757728  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:52.758476  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:55.258004  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:57.758245  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:00.257652  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:02.257901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:04.758074  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:06.758528  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:09.257856  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:11.757537  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:13.758186  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:16.257671  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:18.257951  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:20.757717  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:22.758111  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:25.258291  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:27.758147  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:30.257141  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:32.257828  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:34.258099  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:36.757903  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:39.257811  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:41.757835  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:43.757896  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:46.257631  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:48.757919  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:51.258326  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:53.757689  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:56.257769  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:58.758237  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:01.257880  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:03.258125  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:05.758255  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:08.257563  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:10.758121  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:12.758503  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:15.257621  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:17.258405  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:19.758075  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:22.257402  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:24.258425  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:26.757955  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:28.758271  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:31.257461  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:33.258189  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:35.757165  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
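	
	The interleaved 876341 lines above are minikube's node_ready wait loop for the calico-283124 cluster: it keeps re-reading the Node object until its Ready condition turns True, which never happens in this run. Below is a minimal client-go sketch of the same check; the kubeconfig path, retry interval, and node name are assumptions taken from this log for illustration, not minikube's own code.
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    // nodeIsReady reports whether the named node has a Ready condition set to True.
	    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, cond := range node.Status.Conditions {
	            if cond.Type == corev1.NodeReady {
	                return cond.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }
	
	    func main() {
	        // Assumed kubeconfig path and node name, for illustration only.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        ctx := context.Background()
	        for {
	            ready, err := nodeIsReady(ctx, cs, "calico-283124")
	            if err == nil && ready {
	                fmt.Println("node is Ready")
	                return
	            }
	            fmt.Printf("node not Ready yet (err=%v), retrying...\n", err)
	            time.Sleep(2 * time.Second) // the log above retries roughly every 2.5s
	        }
	    }
	
	The same pattern (get object, inspect a condition, sleep, retry) is what produces the pod_ready lines from process 942848 earlier in this dump, just against Pod conditions instead of Node conditions.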
	
	
	==> CRI-O <==
	Sep 08 12:44:08 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:08.180412674Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=06a30727-b592-4872-ab39-e818e1e5e3d4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:10 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:10.180289393Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=83e49653-5e81-4eb6-b016-c38e0778f4a7 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:10 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:10.180568749Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=83e49653-5e81-4eb6-b016-c38e0778f4a7 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:19 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:19.179973836Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=bb7042fe-3830-476e-881b-090e0418b866 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:19 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:19.180345046Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=bb7042fe-3830-476e-881b-090e0418b866 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:24 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:24.180658303Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0c8b0ae2-3415-4654-a3b0-7552307a5d20 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:24 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:24.180993983Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0c8b0ae2-3415-4654-a3b0-7552307a5d20 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:31 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:31.180019146Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b3829bed-4a31-44a9-9263-85512b554396 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:31 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:31.180309570Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b3829bed-4a31-44a9-9263-85512b554396 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:31 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:31.180904082Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=33394fe9-c44f-4726-b9a7-0814815c13aa name=/runtime.v1.ImageService/PullImage
	Sep 08 12:44:31 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:31.185659475Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 12:44:36 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:36.180164920Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0ba754bb-e3ad-4fa8-b6bc-6e09b6033102 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:36 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:36.180470832Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0ba754bb-e3ad-4fa8-b6bc-6e09b6033102 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:51 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:51.179637542Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6b16a6f5-734f-4fa2-a5e9-3e9adeaa63ed name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:51 old-k8s-version-896003 crio[682]: time="2025-09-08 12:44:51.179938648Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6b16a6f5-734f-4fa2-a5e9-3e9adeaa63ed name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:06 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:06.180009072Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=21e7a20e-1f57-41c3-ae4b-3bdc564023ce name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:06 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:06.180320086Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=21e7a20e-1f57-41c3-ae4b-3bdc564023ce name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:16 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:16.179700521Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e1992c38-79f7-40d3-a41d-4ba9c94bc3a6 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:16 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:16.180041386Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e1992c38-79f7-40d3-a41d-4ba9c94bc3a6 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:21 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:21.180574570Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f20f6dcf-85ee-4989-ab03-d11d669f27d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:21 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:21.180802641Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f20f6dcf-85ee-4989-ab03-d11d669f27d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:28 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:28.180453367Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f21776c3-f28f-4060-bf0e-ae82e20698c6 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:28 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:28.180815085Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f21776c3-f28f-4060-bf0e-ae82e20698c6 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:32 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:32.180021388Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1c85347f-8712-4b66-ae1d-cac50554f366 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:32 old-k8s-version-896003 crio[682]: time="2025-09-08 12:45:32.180325606Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1c85347f-8712-4b66-ae1d-cac50554f366 name=/runtime.v1.ImageService/ImageStatus
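	
	The CRI-O lines above are ImageStatus responses for images the kubelet keeps asking about; fake.domain/registry.k8s.io/echoserver:1.4 is deliberately unresolvable in this test, so "not found" repeats forever. A hedged sketch of issuing the same ImageStatus call against the crio socket with the CRI API client is below; the socket path matches the cri-socket annotation shown later under describe nodes, everything else is illustrative.
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )
	
	    func main() {
	        // unix:///var/run/crio/crio.sock is the socket the node annotation advertises.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()
	
	        client := runtimeapi.NewImageServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()
	
	        resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
	            Image: &runtimeapi.ImageSpec{Image: "fake.domain/registry.k8s.io/echoserver:1.4"},
	        })
	        if err != nil {
	            panic(err)
	        }
	        if resp.Image == nil {
	            fmt.Println("image not found") // matches the "Image ... not found" lines above
	        } else {
	            fmt.Printf("image present: id=%s\n", resp.Image.Id)
	        }
	    }
	
	On the node itself, crictl performs the equivalent query; the sketch only shows what the log's /runtime.v1.ImageService/ImageStatus entries correspond to on the client side.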
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b0a644c606307       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   ffa1487a3cd2f       dashboard-metrics-scraper-5f989dc9cf-f4rk8
	9ade55b3edc9b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   cda6e33963d05       storage-provisioner
	7c8de1f45f71b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                     1                   2bf6958df81d0       coredns-5dd5756b68-99vrp
	0704edfb12400       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   0af1ba0130ec7       busybox
	0c68b4da41592       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a   9 minutes ago       Running             kube-proxy                  1                   be167abb1bf28       kube-proxy-sptvq
	1c160ab3dfbcb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   5c9555d0f23a6       kindnet-bx9xt
	1bd62cd3b9358       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   cda6e33963d05       storage-provisioner
	765b8ead0b1e5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                        1                   f6dce38c48dec       etcd-old-k8s-version-896003
	88a0d1934bbfb       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157   9 minutes ago       Running             kube-scheduler              1                   03f7d3ad2afe1       kube-scheduler-old-k8s-version-896003
	686c36edfbd4c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95   9 minutes ago       Running             kube-apiserver              1                   9aa1635509644       kube-apiserver-old-k8s-version-896003
	9bad1ae1ad18d       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62   9 minutes ago       Running             kube-controller-manager     1                   5ef8f1cec4fb6       kube-controller-manager-old-k8s-version-896003
	
	
	==> coredns [7c8de1f45f71bd48650af20abb5c2aa28a751d0ecaf303e41e6931cd0e115b0b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55804 - 62028 "HINFO IN 8521887911870039379.8223469923305335544. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040339135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
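	
	The coredns log above shows the ready plugin holding back until the kubernetes plugin can reach the API at 10.96.0.1:443, and eventually starting with an unsynced API. A minimal sketch of probing that readiness endpoint directly is below; port 8181 is the ready plugin's default and the pod IP is a placeholder, both assumptions rather than values taken from this log.
	
	    package main
	
	    import (
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )
	
	    func main() {
	        podIP := "10.244.0.10" // illustrative coredns pod IP
	        client := &http.Client{Timeout: 2 * time.Second}
	
	        resp, err := client.Get(fmt.Sprintf("http://%s:8181/ready", podIP))
	        if err != nil {
	            fmt.Println("not ready:", err)
	            return
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        // Returns 200 with "OK" once every plugin (including kubernetes) reports ready.
	        fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	    }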
	
	
	==> describe nodes <==
	Name:               old-k8s-version-896003
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-896003
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=old-k8s-version-896003
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_34_58_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:34:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-896003
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:45:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:42:10 +0000   Mon, 08 Sep 2025 12:34:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:42:10 +0000   Mon, 08 Sep 2025 12:34:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:42:10 +0000   Mon, 08 Sep 2025 12:34:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:42:10 +0000   Mon, 08 Sep 2025 12:35:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-896003
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 538c8e0f78b14b9e92ebf3d6eac1995d
	  System UUID:                4c49eb36-4aee-4dca-981a-0dd58e95d17d
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-99vrp                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-old-k8s-version-896003                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-bx9xt                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-old-k8s-version-896003             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-896003    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-sptvq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-896003             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-57f55c9bc5-z5rkf                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f4rk8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7dbrb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-896003 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-896003 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-896003 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-896003 event: Registered Node old-k8s-version-896003 in Controller
	  Normal  NodeReady                10m                    kubelet          Node old-k8s-version-896003 status is now: NodeReady
	  Normal  Starting                 9m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m42s (x8 over 9m42s)  kubelet          Node old-k8s-version-896003 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m42s (x8 over 9m42s)  kubelet          Node old-k8s-version-896003 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m42s (x8 over 9m42s)  kubelet          Node old-k8s-version-896003 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m25s                  node-controller  Node old-k8s-version-896003 event: Registered Node old-k8s-version-896003 in Controller
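	
	The Allocated resources table above is computed by summing pod requests/limits and dividing by the node's Allocatable values; for example 950m of CPU requested against 8 allocatable CPUs truncates to 11%. A small sketch of that arithmetic with the apimachinery resource package follows, using the numbers from this node as literals.
	
	    package main
	
	    import (
	        "fmt"
	
	        "k8s.io/apimachinery/pkg/api/resource"
	    )
	
	    func main() {
	        // Values copied from the describe output above.
	        cpuRequests := resource.MustParse("950m")
	        cpuAllocatable := resource.MustParse("8")
	        memRequests := resource.MustParse("420Mi")
	        memAllocatable := resource.MustParse("32859360Ki")
	
	        cpuPct := cpuRequests.MilliValue() * 100 / cpuAllocatable.MilliValue()
	        memPct := memRequests.Value() * 100 / memAllocatable.Value()
	
	        fmt.Printf("cpu requests: %s (%d%%)\n", cpuRequests.String(), cpuPct)    // 950m (11%)
	        fmt.Printf("memory requests: %s (%d%%)\n", memRequests.String(), memPct) // 420Mi (1%)
	    }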
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000689] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000005] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[Sep 8 12:37] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000008] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +2.015830] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000007] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.004384] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000005] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +4.123250] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000008] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.003960] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000006] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +8.187331] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000006] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.003987] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000005] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	
	
	==> etcd [765b8ead0b1e57d46872ecd1fe55b1d74c03da4a25a08595dd8d85d10231825b] <==
	{"level":"info","ts":"2025-09-08T12:35:59.584695Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T12:35:59.584731Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-08T12:35:59.585087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-09-08T12:35:59.585224Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-09-08T12:35:59.587344Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T12:35:59.587494Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-08T12:35:59.595064Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T12:35:59.595208Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-08T12:35:59.595447Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-08T12:35:59.595496Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-08T12:35:59.59556Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-08T12:36:00.695339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-08T12:36:00.695416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-08T12:36:00.695451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-08T12:36:00.695464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-09-08T12:36:00.69547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-08T12:36:00.695479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-09-08T12:36:00.695487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-08T12:36:00.698152Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T12:36:00.698173Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T12:36:00.698164Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-896003 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-08T12:36:00.69849Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-08T12:36:00.698589Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-08T12:36:00.699604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-08T12:36:00.699727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 12:45:40 up  3:28,  0 users,  load average: 0.33, 0.95, 1.64
	Linux old-k8s-version-896003 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1c160ab3dfbcb6f27d27c42821d725d0836924c6d21d22cdb9cccd0a8f308e99] <==
	I0908 12:43:34.991868       1 main.go:301] handling current node
	I0908 12:43:44.985117       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:43:44.985158       1 main.go:301] handling current node
	I0908 12:43:54.993943       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:43:54.993986       1 main.go:301] handling current node
	I0908 12:44:04.991839       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:44:04.991877       1 main.go:301] handling current node
	I0908 12:44:14.985658       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:44:14.985701       1 main.go:301] handling current node
	I0908 12:44:24.992589       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:44:24.992625       1 main.go:301] handling current node
	I0908 12:44:34.994521       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:44:34.994559       1 main.go:301] handling current node
	I0908 12:44:44.985245       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:44:44.985292       1 main.go:301] handling current node
	I0908 12:44:54.991768       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:44:54.991805       1 main.go:301] handling current node
	I0908 12:45:04.991762       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:45:04.991795       1 main.go:301] handling current node
	I0908 12:45:14.985750       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:45:14.985787       1 main.go:301] handling current node
	I0908 12:45:24.991764       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:45:24.991814       1 main.go:301] handling current node
	I0908 12:45:34.990593       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:45:34.990631       1 main.go:301] handling current node
	
	
	==> kube-apiserver [686c36edfbd4c19d3aedb2cf3c30545af99cf261f70a6e93a943cf7b7b113a52] <==
	E0908 12:41:03.489180       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 12:41:03.490304       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:42:02.292482       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.103.220.17:443: connect: connection refused
	I0908 12:42:02.292509       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 12:42:03.490064       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 12:42:03.490111       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0908 12:42:03.490122       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:42:03.491088       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 12:42:03.491168       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 12:42:03.491179       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:43:02.293294       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.103.220.17:443: connect: connection refused
	I0908 12:43:02.293325       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0908 12:44:02.293167       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.103.220.17:443: connect: connection refused
	I0908 12:44:02.293197       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 12:44:03.490862       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 12:44:03.490903       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0908 12:44:03.490911       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:44:03.492196       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 12:44:03.492291       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 12:44:03.492299       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:45:02.292544       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.103.220.17:443: connect: connection refused
	I0908 12:45:02.292574       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [9bad1ae1ad18d77f1332b137f3066d0ac9c00dc3716df3e03d8ba5389cf02778] <==
	I0908 12:41:16.463182       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:41:45.971327       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:41:46.471046       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 12:41:54.190490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="113.081µs"
	I0908 12:42:09.191759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="139.66µs"
	E0908 12:42:15.976429       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:42:16.479703       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 12:42:39.170112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="126.996µs"
	E0908 12:42:45.981388       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:42:46.256054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.588µs"
	I0908 12:42:46.488141       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 12:42:54.191869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="125.802µs"
	I0908 12:43:07.190099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="94.67µs"
	E0908 12:43:15.986956       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:43:16.497158       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:43:45.992196       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:43:46.504499       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:44:15.996935       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:44:16.512313       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:44:46.001869       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:44:46.520326       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:45:16.007083       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:45:16.190180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="131.467µs"
	I0908 12:45:16.528916       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 12:45:28.191489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="114.039µs"
	
	
	==> kube-proxy [0c68b4da41592ce102d0e714b7c537d8cad9cfb5f1437c23eafc3286d0018350] <==
	I0908 12:36:04.894223       1 server_others.go:69] "Using iptables proxy"
	I0908 12:36:04.904040       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0908 12:36:04.991644       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:36:04.994927       1 server_others.go:152] "Using iptables Proxier"
	I0908 12:36:04.994980       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0908 12:36:04.994991       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0908 12:36:04.995030       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0908 12:36:04.995285       1 server.go:846] "Version info" version="v1.28.0"
	I0908 12:36:04.995310       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:04.996234       1 config.go:188] "Starting service config controller"
	I0908 12:36:04.996313       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0908 12:36:04.996351       1 config.go:315] "Starting node config controller"
	I0908 12:36:04.996356       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0908 12:36:04.996764       1 config.go:97] "Starting endpoint slice config controller"
	I0908 12:36:04.997489       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0908 12:36:05.096889       1 shared_informer.go:318] Caches are synced for node config
	I0908 12:36:05.096916       1 shared_informer.go:318] Caches are synced for service config
	I0908 12:36:05.097583       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [88a0d1934bbfb7a8b2abe2c0924c6955fe6827025981401e413cc0fbb6ad8ac8] <==
	I0908 12:36:00.678196       1 serving.go:348] Generated self-signed cert in-memory
	W0908 12:36:02.402767       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 12:36:02.402903       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 12:36:02.402924       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 12:36:02.402933       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 12:36:02.497485       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0908 12:36:02.497608       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:02.499434       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:02.499532       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0908 12:36:02.501293       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0908 12:36:02.501381       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0908 12:36:02.599869       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 08 12:44:19 old-k8s-version-896003 kubelet[829]: E0908 12:44:19.180711     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:44:21 old-k8s-version-896003 kubelet[829]: I0908 12:44:21.179419     829 scope.go:117] "RemoveContainer" containerID="b0a644c60630797ff76191cdd3c418e4bbe3772496939698bd0e32d8ca5ca7d3"
	Sep 08 12:44:21 old-k8s-version-896003 kubelet[829]: E0908 12:44:21.179765     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:44:24 old-k8s-version-896003 kubelet[829]: E0908 12:44:24.181319     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:44:36 old-k8s-version-896003 kubelet[829]: I0908 12:44:36.179615     829 scope.go:117] "RemoveContainer" containerID="b0a644c60630797ff76191cdd3c418e4bbe3772496939698bd0e32d8ca5ca7d3"
	Sep 08 12:44:36 old-k8s-version-896003 kubelet[829]: E0908 12:44:36.180041     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:44:36 old-k8s-version-896003 kubelet[829]: E0908 12:44:36.180680     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:44:49 old-k8s-version-896003 kubelet[829]: I0908 12:44:49.179193     829 scope.go:117] "RemoveContainer" containerID="b0a644c60630797ff76191cdd3c418e4bbe3772496939698bd0e32d8ca5ca7d3"
	Sep 08 12:44:49 old-k8s-version-896003 kubelet[829]: E0908 12:44:49.179484     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:44:51 old-k8s-version-896003 kubelet[829]: E0908 12:44:51.180318     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:45:00 old-k8s-version-896003 kubelet[829]: I0908 12:45:00.179263     829 scope.go:117] "RemoveContainer" containerID="b0a644c60630797ff76191cdd3c418e4bbe3772496939698bd0e32d8ca5ca7d3"
	Sep 08 12:45:00 old-k8s-version-896003 kubelet[829]: E0908 12:45:00.179628     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:45:01 old-k8s-version-896003 kubelet[829]: E0908 12:45:01.280531     829 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 12:45:01 old-k8s-version-896003 kubelet[829]: E0908 12:45:01.280593     829 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 12:45:01 old-k8s-version-896003 kubelet[829]: E0908 12:45:01.280730     829 kuberuntime_manager.go:1209] container &Container{Name:kubernetes-dashboard,Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Command:[],Args:[--namespace=kubernetes-dashboard --enable-skip-login --disable-settings-authorizer],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zpgmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9090 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kubernetes-dashboard-8694d4445c-7dbrb_kubernetes-dashboard(4704cc58-09c7-49d5-a649-d7e9fd6c1297): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Sep 08 12:45:01 old-k8s-version-896003 kubelet[829]: E0908 12:45:01.280798     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:45:06 old-k8s-version-896003 kubelet[829]: E0908 12:45:06.180591     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:45:15 old-k8s-version-896003 kubelet[829]: I0908 12:45:15.179199     829 scope.go:117] "RemoveContainer" containerID="b0a644c60630797ff76191cdd3c418e4bbe3772496939698bd0e32d8ca5ca7d3"
	Sep 08 12:45:15 old-k8s-version-896003 kubelet[829]: E0908 12:45:15.179537     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:45:16 old-k8s-version-896003 kubelet[829]: E0908 12:45:16.180306     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:45:21 old-k8s-version-896003 kubelet[829]: E0908 12:45:21.181165     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:45:27 old-k8s-version-896003 kubelet[829]: I0908 12:45:27.179367     829 scope.go:117] "RemoveContainer" containerID="b0a644c60630797ff76191cdd3c418e4bbe3772496939698bd0e32d8ca5ca7d3"
	Sep 08 12:45:27 old-k8s-version-896003 kubelet[829]: E0908 12:45:27.179733     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:45:28 old-k8s-version-896003 kubelet[829]: E0908 12:45:28.181193     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:45:32 old-k8s-version-896003 kubelet[829]: E0908 12:45:32.180611     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	
	
	==> storage-provisioner [1bd62cd3b9358589d46a2c7f83c0c54f1db5ed8e60b9e29d91bac001cbe526f0] <==
	I0908 12:36:04.393115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 12:36:34.398346       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9ade55b3edc9b61d1c8fbfa6355ede882ed6cd8a05cffb6843f0fd0bf3141da1] <==
	I0908 12:36:35.441224       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 12:36:35.451066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 12:36:35.451115       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0908 12:36:52.856950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 12:36:52.857028       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c064b846-3cb1-463b-aa29-6c180848f227", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-896003_be92447a-a74a-4547-9621-02ef7c155c81 became leader
	I0908 12:36:52.857156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-896003_be92447a-a74a-4547-9621-02ef7c155c81!
	I0908 12:36:52.957766       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-896003_be92447a-a74a-4547-9621-02ef7c155c81!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-896003 -n old-k8s-version-896003
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-896003 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-z5rkf kubernetes-dashboard-8694d4445c-7dbrb
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-896003 describe pod metrics-server-57f55c9bc5-z5rkf kubernetes-dashboard-8694d4445c-7dbrb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-896003 describe pod metrics-server-57f55c9bc5-z5rkf kubernetes-dashboard-8694d4445c-7dbrb: exit status 1 (62.702153ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-z5rkf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-7dbrb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-896003 describe pod metrics-server-57f55c9bc5-z5rkf kubernetes-dashboard-8694d4445c-7dbrb: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.53s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vwd7n" [537a29c5-ffc1-49e3-8a70-737656b3a999] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 12:36:59.761579  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:31.353410  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997730 -n no-preload-997730
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-08 12:45:54.703315216 +0000 UTC m=+4373.996057695
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-997730 describe po kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-997730 describe po kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-vwd7n
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-997730/192.168.94.2
Start Time:       Mon, 08 Sep 2025 12:36:19 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ln5gl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-ln5gl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m35s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n to no-preload-997730
Normal   Pulling    4m33s (x5 over 9m35s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     4m3s (x5 over 9m5s)    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m3s (x5 over 9m5s)    kubelet            Error: ErrImagePull
Warning  Failed     2m54s (x16 over 9m5s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    111s (x21 over 9m5s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-997730 logs kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context no-preload-997730 logs kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard: exit status 1 (77.208534ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-vwd7n" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context no-preload-997730 logs kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-997730
helpers_test.go:243: (dbg) docker inspect no-preload-997730:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb",
	        "Created": "2025-09-08T12:34:43.172154041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 938893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:36:02.231771503Z",
	            "FinishedAt": "2025-09-08T12:36:01.350027288Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb/hosts",
	        "LogPath": "/var/lib/docker/containers/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb-json.log",
	        "Name": "/no-preload-997730",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-997730:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-997730",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb",
	                "LowerDir": "/var/lib/docker/overlay2/e34fbca8234696e511e5798b67fe0066e07e799a01b931170e8f33aea364697f-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e34fbca8234696e511e5798b67fe0066e07e799a01b931170e8f33aea364697f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e34fbca8234696e511e5798b67fe0066e07e799a01b931170e8f33aea364697f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e34fbca8234696e511e5798b67fe0066e07e799a01b931170e8f33aea364697f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-997730",
	                "Source": "/var/lib/docker/volumes/no-preload-997730/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-997730",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-997730",
	                "name.minikube.sigs.k8s.io": "no-preload-997730",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8d9936158e25c459552b9ad67c50d23cf1343416009b9731b034684b1af9d78",
	            "SandboxKey": "/var/run/docker/netns/a8d9936158e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-997730": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:11:f7:fb:5b:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34c70443944eae5e5bf18c5289547ab218c10406c1c1860d95139a16069c0d1e",
	                    "EndpointID": "7495518a83442d35741d2e0362b38a949419c13b26b958566d9ac3fe24c8edf8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-997730",
	                        "b7f8cab201e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997730 -n no-preload-997730
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-997730 logs -n 25
E0908 12:45:56.269183  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-997730 logs -n 25: (1.268065432s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-283124 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:34 UTC │
	│ ssh     │ -p bridge-283124 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:34 UTC │
	│ ssh     │ -p bridge-283124 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │                     │
	│ ssh     │ -p bridge-283124 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:34 UTC │
	│ ssh     │ -p bridge-283124 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo containerd config dump                                                                                                                                                                                                  │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo crio config                                                                                                                                                                                                             │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ delete  │ -p bridge-283124                                                                                                                                                                                                                              │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ start   │ -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-896003 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ stop    │ -p old-k8s-version-896003 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-997730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ stop    │ -p no-preload-997730 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-896003 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ start   │ -p old-k8s-version-896003 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable dashboard -p no-preload-997730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ start   │ -p no-preload-997730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-039958 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ stop    │ -p default-k8s-diff-port-039958 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-039958 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ start   │ -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
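Each "ssh" row in the table above is a plain minikube ssh invocation; while a profile is still up, the same inspection can be replayed by hand, for example (illustrative only, since bridge-283124 is deleted a few rows later):

    out/minikube-linux-amd64 -p bridge-283124 ssh "sudo systemctl cat crio --no-pager"
    out/minikube-linux-amd64 -p bridge-283124 ssh "sudo crio config"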
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:36:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:36:46.576701  942848 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:36:46.576859  942848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:36:46.576870  942848 out.go:374] Setting ErrFile to fd 2...
	I0908 12:36:46.576877  942848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:36:46.577119  942848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:36:46.577715  942848 out.go:368] Setting JSON to false
	I0908 12:36:46.579062  942848 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11951,"bootTime":1757323056,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 12:36:46.579193  942848 start.go:140] virtualization: kvm guest
	I0908 12:36:46.581327  942848 out.go:179] * [default-k8s-diff-port-039958] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 12:36:46.582661  942848 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:36:46.582698  942848 notify.go:220] Checking for updates...
	I0908 12:36:46.584965  942848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:36:46.586098  942848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:36:46.587326  942848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 12:36:46.588738  942848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 12:36:46.590003  942848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:36:46.591594  942848 config.go:182] Loaded profile config "default-k8s-diff-port-039958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:36:46.592226  942848 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:36:46.618634  942848 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:36:46.618773  942848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:36:46.679298  942848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:36:46.668942756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:36:46.679407  942848 docker.go:318] overlay module found
	I0908 12:36:46.681016  942848 out.go:179] * Using the docker driver based on existing profile
	I0908 12:36:46.682334  942848 start.go:304] selected driver: docker
	I0908 12:36:46.682353  942848 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:36:46.682476  942848 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:36:46.683426  942848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:36:46.745282  942848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:36:46.73243227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:36:46.745663  942848 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:36:46.745700  942848 cni.go:84] Creating CNI manager for ""
	I0908 12:36:46.745763  942848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:36:46.745814  942848 start.go:348] cluster config:
	{Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:36:46.747972  942848 out.go:179] * Starting "default-k8s-diff-port-039958" primary control-plane node in "default-k8s-diff-port-039958" cluster
	I0908 12:36:46.749230  942848 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:36:46.750628  942848 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:36:46.751931  942848 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:36:46.751992  942848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:36:46.752002  942848 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 12:36:46.752111  942848 cache.go:58] Caching tarball of preloaded images
	I0908 12:36:46.752219  942848 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 12:36:46.752258  942848 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:36:46.752419  942848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/config.json ...
	I0908 12:36:46.780591  942848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:36:46.780624  942848 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:36:46.780647  942848 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:36:46.780682  942848 start.go:360] acquireMachinesLock for default-k8s-diff-port-039958: {Name:mk74fa9073ebc792abfeccea0efe5ebf172e66a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:36:46.780761  942848 start.go:364] duration metric: took 51.375µs to acquireMachinesLock for "default-k8s-diff-port-039958"
	I0908 12:36:46.780788  942848 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:36:46.780799  942848 fix.go:54] fixHost starting: 
	I0908 12:36:46.781129  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:46.803941  942848 fix.go:112] recreateIfNeeded on default-k8s-diff-port-039958: state=Stopped err=<nil>
	W0908 12:36:46.803983  942848 fix.go:138] unexpected machine state, will restart: <nil>
	W0908 12:36:42.758116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:45.258155  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:42.045527  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	W0908 12:36:44.545681  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	W0908 12:36:46.546066  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	I0908 12:36:46.806070  942848 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-039958" ...
	I0908 12:36:46.806212  942848 cli_runner.go:164] Run: docker start default-k8s-diff-port-039958
	I0908 12:36:47.111853  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:47.137411  942848 kic.go:430] container "default-k8s-diff-port-039958" state is running.
	I0908 12:36:47.137907  942848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-039958
	I0908 12:36:47.162432  942848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/config.json ...
	I0908 12:36:47.162670  942848 machine.go:93] provisionDockerMachine start ...
	I0908 12:36:47.162747  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:47.185220  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:47.185582  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:47.185597  942848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:36:47.186433  942848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51208->127.0.0.1:33483: read: connection reset by peer
	I0908 12:36:50.319771  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039958
	
	I0908 12:36:50.319812  942848 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-039958"
	I0908 12:36:50.319874  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:50.341500  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:50.341753  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:50.341765  942848 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-039958 && echo "default-k8s-diff-port-039958" | sudo tee /etc/hostname
	I0908 12:36:50.492659  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039958
	
	I0908 12:36:50.492756  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:50.516857  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:50.517256  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:50.517301  942848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-039958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-039958/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-039958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:36:50.644286  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
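A quick cross-check of the hostname provisioning above (not part of the captured run) is to ask the node directly:

    out/minikube-linux-amd64 -p default-k8s-diff-port-039958 ssh "hostname; grep 127.0.1.1 /etc/hosts"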
	I0908 12:36:50.644321  942848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 12:36:50.644344  942848 ubuntu.go:190] setting up certificates
	I0908 12:36:50.644356  942848 provision.go:84] configureAuth start
	I0908 12:36:50.644424  942848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-039958
	I0908 12:36:50.662342  942848 provision.go:143] copyHostCerts
	I0908 12:36:50.662414  942848 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem, removing ...
	I0908 12:36:50.662431  942848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem
	I0908 12:36:50.662496  942848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 12:36:50.662596  942848 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem, removing ...
	I0908 12:36:50.662605  942848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem
	I0908 12:36:50.662630  942848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 12:36:50.662714  942848 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem, removing ...
	I0908 12:36:50.662722  942848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem
	I0908 12:36:50.662742  942848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 12:36:50.662805  942848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-039958 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-039958 localhost minikube]
	I0908 12:36:50.862531  942848 provision.go:177] copyRemoteCerts
	I0908 12:36:50.862604  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:36:50.862646  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:50.885478  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:50.986239  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:36:51.016291  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0908 12:36:51.045268  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 12:36:51.069675  942848 provision.go:87] duration metric: took 425.304221ms to configureAuth
	I0908 12:36:51.069704  942848 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:36:51.069902  942848 config.go:182] Loaded profile config "default-k8s-diff-port-039958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:36:51.070014  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.094609  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:51.094825  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:51.094845  942848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:36:51.430315  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:36:51.430362  942848 machine.go:96] duration metric: took 4.267670025s to provisionDockerMachine
	I0908 12:36:51.430380  942848 start.go:293] postStartSetup for "default-k8s-diff-port-039958" (driver="docker")
	I0908 12:36:51.430395  942848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:36:51.430518  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:36:51.430587  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.451170  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.543737  942848 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:36:51.548216  942848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:36:51.548260  942848 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:36:51.548273  942848 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:36:51.548282  942848 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:36:51.548296  942848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 12:36:51.548366  942848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 12:36:51.548469  942848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem -> 6186202.pem in /etc/ssl/certs
	I0908 12:36:51.548587  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:36:51.558329  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /etc/ssl/certs/6186202.pem (1708 bytes)
	W0908 12:36:47.258256  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:49.757606  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:51.758239  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:49.046133  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	W0908 12:36:51.081394  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	I0908 12:36:51.586425  942848 start.go:296] duration metric: took 156.023527ms for postStartSetup
	I0908 12:36:51.586525  942848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:36:51.586571  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.613258  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.705344  942848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:36:51.711811  942848 fix.go:56] duration metric: took 4.931000802s for fixHost
	I0908 12:36:51.711849  942848 start.go:83] releasing machines lock for "default-k8s-diff-port-039958", held for 4.931072765s
	I0908 12:36:51.711931  942848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-039958
	I0908 12:36:51.734101  942848 ssh_runner.go:195] Run: cat /version.json
	I0908 12:36:51.734183  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.734267  942848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:36:51.734367  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.754850  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.755853  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.923783  942848 ssh_runner.go:195] Run: systemctl --version
	I0908 12:36:51.929275  942848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:36:52.084547  942848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:36:52.090132  942848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:36:52.101273  942848 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:36:52.101378  942848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:36:52.111707  942848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
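The .mk_disabled rename parks a CNI config without deleting it: the CNI config loader only considers *.conf, *.conflist and *.json files, so a renamed file is simply ignored. To see what was parked, or to undo it, something along these lines works (the loopback file name below is only an example):

    out/minikube-linux-amd64 -p default-k8s-diff-port-039958 ssh "ls -la /etc/cni/net.d"
    # re-enable a parked config by dropping the suffix, e.g.:
    #   sudo mv /etc/cni/net.d/200-loopback.conf.mk_disabled /etc/cni/net.d/200-loopback.conf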
	I0908 12:36:52.111743  942848 start.go:495] detecting cgroup driver to use...
	I0908 12:36:52.111782  942848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:36:52.111825  942848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:36:52.126947  942848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:36:52.140290  942848 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:36:52.140371  942848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:36:52.154876  942848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:36:52.168633  942848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:36:52.273095  942848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:36:52.357717  942848 docker.go:234] disabling docker service ...
	I0908 12:36:52.357806  942848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:36:52.372526  942848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:36:52.385814  942848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:36:52.476450  942848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:36:52.566747  942848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:36:52.581723  942848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:36:52.605430  942848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:36:52.605564  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.619096  942848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:36:52.619198  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.632585  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.646076  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.658574  942848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:36:52.668753  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.680494  942848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.693152  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
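Taken together, the sed edits above aim to leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch based on the stock crio.conf section layout; the real drop-in in the base image may order keys differently or carry extra ones):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]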
	I0908 12:36:52.705737  942848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:36:52.715688  942848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:36:52.725850  942848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:36:52.815349  942848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 12:36:53.835349  942848 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.019888602s)
	I0908 12:36:53.835376  942848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:36:53.835423  942848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:36:53.839640  942848 start.go:563] Will wait 60s for crictl version
	I0908 12:36:53.839788  942848 ssh_runner.go:195] Run: which crictl
	I0908 12:36:53.844312  942848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:36:53.880145  942848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 12:36:53.880265  942848 ssh_runner.go:195] Run: crio --version
	I0908 12:36:53.927894  942848 ssh_runner.go:195] Run: crio --version
	I0908 12:36:53.977239  942848 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 12:36:52.577096  938712 pod_ready.go:94] pod "coredns-66bc5c9577-nd9km" is "Ready"
	I0908 12:36:52.577136  938712 pod_ready.go:86] duration metric: took 36.037680544s for pod "coredns-66bc5c9577-nd9km" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.582158  938712 pod_ready.go:83] waiting for pod "etcd-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.588939  938712 pod_ready.go:94] pod "etcd-no-preload-997730" is "Ready"
	I0908 12:36:52.588976  938712 pod_ready.go:86] duration metric: took 6.784149ms for pod "etcd-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.591480  938712 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.598103  938712 pod_ready.go:94] pod "kube-apiserver-no-preload-997730" is "Ready"
	I0908 12:36:52.598137  938712 pod_ready.go:86] duration metric: took 6.627132ms for pod "kube-apiserver-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.601886  938712 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.743487  938712 pod_ready.go:94] pod "kube-controller-manager-no-preload-997730" is "Ready"
	I0908 12:36:52.743515  938712 pod_ready.go:86] duration metric: took 141.597757ms for pod "kube-controller-manager-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.944115  938712 pod_ready.go:83] waiting for pod "kube-proxy-wqscj" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.342976  938712 pod_ready.go:94] pod "kube-proxy-wqscj" is "Ready"
	I0908 12:36:53.343007  938712 pod_ready.go:86] duration metric: took 398.863544ms for pod "kube-proxy-wqscj" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.543367  938712 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.943688  938712 pod_ready.go:94] pod "kube-scheduler-no-preload-997730" is "Ready"
	I0908 12:36:53.943731  938712 pod_ready.go:86] duration metric: took 400.331351ms for pod "kube-scheduler-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.943745  938712 pod_ready.go:40] duration metric: took 37.408844643s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:36:54.001636  938712 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:36:54.003368  938712 out.go:179] * Done! kubectl is now configured to use "no-preload-997730" cluster and "default" namespace by default
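At this point kubectl (pointed at the same KUBECONFIG the run uses) can be spot-checked directly, for example:

    kubectl config current-context                      # expect: no-preload-997730
    kubectl --context no-preload-997730 get pods -n kube-system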
	I0908 12:36:53.980801  942848 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-039958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:36:54.005208  942848 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0908 12:36:54.009589  942848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:36:54.022563  942848 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:36:54.022720  942848 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:36:54.022776  942848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:36:54.076190  942848 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:36:54.076225  942848 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:36:54.076295  942848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:36:54.118904  942848 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:36:54.118932  942848 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:36:54.118943  942848 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 crio true true} ...
	I0908 12:36:54.119083  942848 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-039958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
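The rendered kubelet drop-in and unit file land on the node a few lines below (10-kubeadm.conf and kubelet.service); the effective result can be inspected the same way the audit table inspects crio and containerd, for example:

    out/minikube-linux-amd64 -p default-k8s-diff-port-039958 ssh "sudo systemctl cat kubelet --no-pager"
    out/minikube-linux-amd64 -p default-k8s-diff-port-039958 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"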
	I0908 12:36:54.119170  942848 ssh_runner.go:195] Run: crio config
	I0908 12:36:54.171743  942848 cni.go:84] Creating CNI manager for ""
	I0908 12:36:54.171768  942848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:36:54.171782  942848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:36:54.171813  942848 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-039958 NodeName:default-k8s-diff-port-039958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:36:54.171991  942848 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-039958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
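Once this rendered config has been copied to /var/tmp/minikube/kubeadm.yaml.new (a few lines below), it can be sanity-checked with kubeadm's own validator; a sketch, not something the test itself runs:

    out/minikube-linux-amd64 -p default-k8s-diff-port-039958 ssh \
      "sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"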
	I0908 12:36:54.172070  942848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:36:54.182142  942848 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:36:54.182220  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:36:54.192725  942848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0908 12:36:54.214079  942848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:36:54.234494  942848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0908 12:36:54.255523  942848 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:36:54.260549  942848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:36:54.274598  942848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:36:54.363767  942848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:36:54.380309  942848 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958 for IP: 192.168.103.2
	I0908 12:36:54.380327  942848 certs.go:194] generating shared ca certs ...
	I0908 12:36:54.380345  942848 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:54.380497  942848 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 12:36:54.380536  942848 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 12:36:54.380543  942848 certs.go:256] generating profile certs ...
	I0908 12:36:54.380626  942848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/client.key
	I0908 12:36:54.380670  942848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/apiserver.key.b7da9e12
	I0908 12:36:54.380700  942848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/proxy-client.key
	I0908 12:36:54.380808  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem (1338 bytes)
	W0908 12:36:54.380832  942848 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620_empty.pem, impossibly tiny 0 bytes
	I0908 12:36:54.380839  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:36:54.380860  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:36:54.380878  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:36:54.380900  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 12:36:54.380952  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:36:54.381854  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:36:54.413826  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:36:54.444441  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:36:54.499191  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 12:36:54.595250  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0908 12:36:54.624909  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:36:54.652144  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:36:54.679150  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 12:36:54.706419  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem --> /usr/share/ca-certificates/618620.pem (1338 bytes)
	I0908 12:36:54.733331  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /usr/share/ca-certificates/6186202.pem (1708 bytes)
	I0908 12:36:54.759761  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:36:54.786705  942848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:36:54.808171  942848 ssh_runner.go:195] Run: openssl version
	I0908 12:36:54.814430  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:36:54.826103  942848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:36:54.830371  942848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:36:54.830445  942848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:36:54.838010  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:36:54.848205  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/618620.pem && ln -fs /usr/share/ca-certificates/618620.pem /etc/ssl/certs/618620.pem"
	I0908 12:36:54.859075  942848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/618620.pem
	I0908 12:36:54.863257  942848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:43 /usr/share/ca-certificates/618620.pem
	I0908 12:36:54.863336  942848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/618620.pem
	I0908 12:36:54.871793  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/618620.pem /etc/ssl/certs/51391683.0"
	I0908 12:36:54.882122  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6186202.pem && ln -fs /usr/share/ca-certificates/6186202.pem /etc/ssl/certs/6186202.pem"
	I0908 12:36:54.894077  942848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6186202.pem
	I0908 12:36:54.898061  942848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:43 /usr/share/ca-certificates/6186202.pem
	I0908 12:36:54.898134  942848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6186202.pem
	I0908 12:36:54.907305  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6186202.pem /etc/ssl/certs/3ec20f2e.0"
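
The test/ln pairs above install each CA into OpenSSL's hashed trust directory: the link name is the certificate's subject hash (what `openssl x509 -hash -noout` prints) plus a ".0" suffix, which is how OpenSSL locates trusted certs under /etc/ssl/certs. A hedged Go sketch of that pattern, shelling out to openssl (assumes openssl is on PATH; the paths are taken from the log for illustration only):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCALink symlinks certPath into certsDir as "<subject-hash>.0",
    // the naming scheme OpenSSL uses to find trusted CAs by hash lookup.
    func installCALink(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // ignore error; the link may not exist yet
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCALink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
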
	I0908 12:36:54.919955  942848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:36:54.924868  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:36:54.932535  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:36:54.940947  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:36:54.949980  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:36:54.958562  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:36:54.967065  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
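
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The same check can be expressed with Go's crypto/x509, shown here as a standalone sketch (the path is one of the certs from this run, used only as an example):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend 86400` answers for 24 hours.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
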
	I0908 12:36:54.980901  942848 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:36:54.981020  942848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:36:54.981071  942848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:36:55.092299  942848 cri.go:89] found id: ""
	I0908 12:36:55.092362  942848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:36:55.105002  942848 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:36:55.105028  942848 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:36:55.105086  942848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:36:55.180113  942848 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:36:55.181205  942848 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-039958" does not appear in /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:36:55.181925  942848 kubeconfig.go:62] /home/jenkins/minikube-integration/21512-614854/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-039958" cluster setting kubeconfig missing "default-k8s-diff-port-039958" context setting]
	I0908 12:36:55.182972  942848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:55.184794  942848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:36:55.203380  942848 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0908 12:36:55.203435  942848 kubeadm.go:593] duration metric: took 98.400373ms to restartPrimaryControlPlane
	I0908 12:36:55.203451  942848 kubeadm.go:394] duration metric: took 222.56119ms to StartCluster
	I0908 12:36:55.203480  942848 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:55.203583  942848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:36:55.205699  942848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:55.206063  942848 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:36:55.206341  942848 config.go:182] Loaded profile config "default-k8s-diff-port-039958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:36:55.206406  942848 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:36:55.206498  942848 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.206517  942848 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.206526  942848 addons.go:247] addon storage-provisioner should already be in state true
	I0908 12:36:55.206558  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.207111  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.207198  942848 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.207239  942848 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-039958"
	I0908 12:36:55.207501  942848 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.207521  942848 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.207529  942848 addons.go:247] addon dashboard should already be in state true
	I0908 12:36:55.207568  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.207608  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.207859  942848 out.go:179] * Verifying Kubernetes components...
	I0908 12:36:55.208037  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.208345  942848 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.208367  942848 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.208375  942848 addons.go:247] addon metrics-server should already be in state true
	I0908 12:36:55.208414  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.208857  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.209878  942848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:36:55.234038  942848 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.234074  942848 addons.go:247] addon default-storageclass should already be in state true
	I0908 12:36:55.234113  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.234616  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.234716  942848 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 12:36:55.235893  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 12:36:55.235919  942848 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 12:36:55.235988  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.237895  942848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:36:55.239291  942848 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:36:55.239317  942848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:36:55.239376  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.241236  942848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 12:36:55.242448  942848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 12:36:55.243535  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 12:36:55.243556  942848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 12:36:55.243627  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.259213  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.261274  942848 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:36:55.261304  942848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:36:55.261388  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.265130  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.265428  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.287889  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.506429  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 12:36:55.506482  942848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 12:36:55.507123  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 12:36:55.507149  942848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 12:36:55.676343  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 12:36:55.676443  942848 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 12:36:55.679168  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:36:55.684795  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 12:36:55.684827  942848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 12:36:55.699825  942848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:36:55.778296  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:36:55.778917  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:36:55.778944  942848 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 12:36:55.783526  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 12:36:55.783560  942848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 12:36:55.884019  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 12:36:55.884050  942848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 12:36:55.885352  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:36:55.993916  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 12:36:55.993953  942848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 12:36:56.092995  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 12:36:56.093029  942848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 12:36:56.189962  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 12:36:56.190002  942848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 12:36:56.213231  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 12:36:56.213277  942848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 12:36:56.298377  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 12:36:56.298412  942848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 12:36:56.321438  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
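
The addon installs above all follow one pattern: scp each manifest to /etc/kubernetes/addons, then run the cluster's own kubectl with KUBECONFIG pointing at /var/lib/minikube/kubeconfig and a single `apply -f` over every file. A rough host-side equivalent, sketched in Go with os/exec (binary and paths are placeholders, not the exact command minikube runs over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifests runs `kubectl apply -f` for each manifest against the given
    // kubeconfig, roughly what minikube does over SSH for its addon YAML.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-dp.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        }
        if err := applyManifests("kubectl", "/var/lib/minikube/kubeconfig", manifests); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
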
	W0908 12:36:53.758326  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:56.257789  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	I0908 12:37:01.113673  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.434453923s)
	I0908 12:37:01.113770  942848 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.413909704s)
	I0908 12:37:01.113807  942848 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-039958" to be "Ready" ...
	I0908 12:37:01.114220  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.335889805s)
	I0908 12:37:01.179947  942848 node_ready.go:49] node "default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:01.179986  942848 node_ready.go:38] duration metric: took 66.160185ms for node "default-k8s-diff-port-039958" to be "Ready" ...
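
The node_ready wait above succeeds once the Node object's Ready condition reports True. A minimal client-go sketch of such a poll, under the assumption that k8s.io/client-go is available; the kubeconfig path is a placeholder and this is not minikube's actual node_ready implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its Ready condition is True or the timeout expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "default-k8s-diff-port-039958", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
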
	I0908 12:37:01.180005  942848 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:37:01.180076  942848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:37:01.188491  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.303097359s)
	I0908 12:37:01.188538  942848 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-039958"
	I0908 12:37:01.188647  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.867162606s)
	I0908 12:37:01.190470  942848 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-039958 addons enable metrics-server
	
	I0908 12:37:01.192011  942848 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 12:37:01.193234  942848 addons.go:514] duration metric: took 5.986829567s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 12:37:01.196437  942848 api_server.go:72] duration metric: took 5.990326761s to wait for apiserver process to appear ...
	I0908 12:37:01.196458  942848 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:37:01.196476  942848 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0908 12:37:01.201894  942848 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:37:01.201920  942848 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
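
Right after a restart the apiserver answers /healthz with 500 until its remaining post-start hooks (here rbac/bootstrap-roles and apiservice-discovery-controller) complete, so the health wait simply polls until it gets a 200. A standalone Go sketch of that poll; TLS verification is skipped only to keep the example short, and a real client should trust the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
    // or the timeout expires, printing the failure body on each non-200 response.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.103.2:8444/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
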
	W0908 12:36:58.258533  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:00.758093  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	I0908 12:37:01.696590  942848 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0908 12:37:01.702086  942848 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:37:01.702131  942848 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:37:02.196683  942848 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0908 12:37:02.203013  942848 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0908 12:37:02.204330  942848 api_server.go:141] control plane version: v1.34.0
	I0908 12:37:02.204361  942848 api_server.go:131] duration metric: took 1.007896936s to wait for apiserver health ...
	I0908 12:37:02.204370  942848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:37:02.208721  942848 system_pods.go:59] 9 kube-system pods found
	I0908 12:37:02.208782  942848 system_pods.go:61] "coredns-66bc5c9577-gb4rh" [6a5ce944-87ea-43fc-8e1b-6b7c6602d782] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:37:02.208795  942848 system_pods.go:61] "etcd-default-k8s-diff-port-039958" [a8523098-eac8-46a4-9c22-2a2bc216a18c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:37:02.208804  942848 system_pods.go:61] "kindnet-89lwp" [7f14a0af-69dc-410a-a7de-fd608eb510b7] Running
	I0908 12:37:02.208812  942848 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039958" [94dee560-f2b4-4c2b-beb8-9f3b0042b381] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:37:02.208819  942848 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039958" [570eff95-5de5-4d44-8c20-2d50d16c0d96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:37:02.208831  942848 system_pods.go:61] "kube-proxy-cgrs8" [a898f47e-1688-4fb8-8f23-f1a05d5a1f33] Running
	I0908 12:37:02.208836  942848 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039958" [d958fbef-3165-4429-b5a0-305a0ff21dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:37:02.208841  942848 system_pods.go:61] "metrics-server-746fcd58dc-hvqdm" [d648640c-2cab-4575-8290-51c39f0a19b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:37:02.208844  942848 system_pods.go:61] "storage-provisioner" [89c03a4b-2580-4ee8-a683-d1523dd99fef] Running
	I0908 12:37:02.208850  942848 system_pods.go:74] duration metric: took 4.474582ms to wait for pod list to return data ...
	I0908 12:37:02.208861  942848 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:37:02.211700  942848 default_sa.go:45] found service account: "default"
	I0908 12:37:02.211729  942848 default_sa.go:55] duration metric: took 2.854101ms for default service account to be created ...
	I0908 12:37:02.211739  942848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:37:02.215028  942848 system_pods.go:86] 9 kube-system pods found
	I0908 12:37:02.215070  942848 system_pods.go:89] "coredns-66bc5c9577-gb4rh" [6a5ce944-87ea-43fc-8e1b-6b7c6602d782] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:37:02.215078  942848 system_pods.go:89] "etcd-default-k8s-diff-port-039958" [a8523098-eac8-46a4-9c22-2a2bc216a18c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:37:02.215083  942848 system_pods.go:89] "kindnet-89lwp" [7f14a0af-69dc-410a-a7de-fd608eb510b7] Running
	I0908 12:37:02.215088  942848 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-039958" [94dee560-f2b4-4c2b-beb8-9f3b0042b381] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:37:02.215095  942848 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-039958" [570eff95-5de5-4d44-8c20-2d50d16c0d96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:37:02.215099  942848 system_pods.go:89] "kube-proxy-cgrs8" [a898f47e-1688-4fb8-8f23-f1a05d5a1f33] Running
	I0908 12:37:02.215105  942848 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-039958" [d958fbef-3165-4429-b5a0-305a0ff21dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:37:02.215109  942848 system_pods.go:89] "metrics-server-746fcd58dc-hvqdm" [d648640c-2cab-4575-8290-51c39f0a19b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:37:02.215119  942848 system_pods.go:89] "storage-provisioner" [89c03a4b-2580-4ee8-a683-d1523dd99fef] Running
	I0908 12:37:02.215127  942848 system_pods.go:126] duration metric: took 3.381403ms to wait for k8s-apps to be running ...
	I0908 12:37:02.215134  942848 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:37:02.215182  942848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:37:02.228839  942848 system_svc.go:56] duration metric: took 13.689257ms WaitForService to wait for kubelet
	I0908 12:37:02.228878  942848 kubeadm.go:578] duration metric: took 7.022770217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:37:02.228905  942848 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:37:02.232419  942848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 12:37:02.232451  942848 node_conditions.go:123] node cpu capacity is 8
	I0908 12:37:02.232465  942848 node_conditions.go:105] duration metric: took 3.554674ms to run NodePressure ...
	I0908 12:37:02.232479  942848 start.go:241] waiting for startup goroutines ...
	I0908 12:37:02.232487  942848 start.go:246] waiting for cluster config update ...
	I0908 12:37:02.232498  942848 start.go:255] writing updated cluster config ...
	I0908 12:37:02.232770  942848 ssh_runner.go:195] Run: rm -f paused
	I0908 12:37:02.236948  942848 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:37:02.241091  942848 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gb4rh" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 12:37:04.247344  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:06.247957  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:03.257787  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:05.757720  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:08.748018  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:11.247224  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:08.257784  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:10.757968  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:13.747206  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:15.748096  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:13.258116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:15.757477  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:18.247360  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:20.247841  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:17.757872  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:19.757985  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:22.747356  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:25.247866  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:22.258365  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:24.757273  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:27.747272  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:29.747903  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:27.257759  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:29.758603  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:32.246724  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	I0908 12:37:32.746600  942848 pod_ready.go:94] pod "coredns-66bc5c9577-gb4rh" is "Ready"
	I0908 12:37:32.746633  942848 pod_ready.go:86] duration metric: took 30.50551235s for pod "coredns-66bc5c9577-gb4rh" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.749803  942848 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.754452  942848 pod_ready.go:94] pod "etcd-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:32.754481  942848 pod_ready.go:86] duration metric: took 4.650443ms for pod "etcd-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.757100  942848 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.761953  942848 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:32.761985  942848 pod_ready.go:86] duration metric: took 4.849995ms for pod "kube-apiserver-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.764191  942848 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.945383  942848 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:32.945422  942848 pod_ready.go:86] duration metric: took 181.203994ms for pod "kube-controller-manager-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:33.145058  942848 pod_ready.go:83] waiting for pod "kube-proxy-cgrs8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:33.544898  942848 pod_ready.go:94] pod "kube-proxy-cgrs8" is "Ready"
	I0908 12:37:33.544927  942848 pod_ready.go:86] duration metric: took 399.833177ms for pod "kube-proxy-cgrs8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:33.745634  942848 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:34.144919  942848 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:34.144965  942848 pod_ready.go:86] duration metric: took 399.29663ms for pod "kube-scheduler-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:34.144988  942848 pod_ready.go:40] duration metric: took 31.907998549s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
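
The pod_ready phase above resolves each label selector (k8s-app=kube-dns, component=etcd, component=kube-apiserver, ...) and waits for every matching kube-system pod to report a Ready condition of True. A client-go sketch of the per-selector check (kubeconfig path is a placeholder and the retry loop is omitted for brevity; this is an illustration, not minikube's pod_ready code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podsReady reports whether every kube-system pod matching selector has
    // condition Ready=True, which is what each pod_ready wait checks.
    func podsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
            ok, err := podsReady(cs, sel)
            fmt.Println(sel, "ready:", ok, err)
        }
    }
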
	I0908 12:37:34.196309  942848 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:37:34.198553  942848 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-039958" cluster and "default" namespace by default
	W0908 12:37:32.257319  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:34.258404  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:36.757652  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:38.758275  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:41.257525  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:43.757901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:45.758150  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:48.257273  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:50.257639  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:52.757594  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:54.758061  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:56.758378  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:59.258066  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:01.757513  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:03.758132  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:06.258116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:08.757359  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:11.257772  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:13.258266  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:15.758076  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:18.258221  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:20.757456  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:22.757615  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:24.757944  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:27.257481  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:29.257676  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:31.757922  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:33.757998  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:35.758189  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:38.257284  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:40.258186  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:42.757563  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:44.758049  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:47.258155  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:49.758499  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:52.257549  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:54.257641  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:56.257796  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:58.758359  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:01.257901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:03.757752  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:06.257817  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:08.258296  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:10.757713  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:13.258258  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:15.757976  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:18.257584  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:20.257682  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:22.758060  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:25.257404  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:27.257971  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:29.757975  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:32.257556  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:34.257819  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:36.757633  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:38.757871  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:41.257638  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:43.257970  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:45.757733  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:47.758232  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:49.758583  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:52.257803  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:54.257902  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:56.758212  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:59.257321  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:01.257592  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:03.757620  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:06.257707  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:08.757824  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:11.257105  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:13.257921  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:15.258039  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:17.758096  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:20.258070  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:22.757269  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:24.757608  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:27.257916  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:29.758141  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:32.257932  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:34.758358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:37.257458  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:39.257731  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:41.758247  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:44.257810  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:46.258011  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:48.758378  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:51.257347  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:53.757974  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:56.258386  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:58.757745  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:00.758360  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:03.257917  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:05.758305  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:08.257694  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:10.757411  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:12.757802  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:15.258051  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:17.258437  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:19.758059  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:22.257165  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:24.257861  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:26.758229  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:29.257287  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:31.257940  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:33.757609  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:36.257193  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:38.257338  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:40.259086  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:42.757325  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:44.757506  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:46.757651  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:48.758048  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:51.257798  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:53.757260  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:55.758043  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:58.257673  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:00.757447  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:03.258213  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:05.758038  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:08.257935  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:10.757253  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:12.757315  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:14.758076  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:17.257358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:19.257904  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:21.757955  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:23.758139  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:26.258024  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:28.758804  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:31.257119  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:33.257824  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:35.257908  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:37.757486  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:39.757547  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:41.757854  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:44.258038  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:46.258403  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:48.758374  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:51.257068  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:53.258291  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:55.757944  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:58.258146  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:00.258571  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:02.758004  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:05.257358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:07.257469  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:09.258160  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:11.758090  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:14.257557  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:16.257748  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:18.258516  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:20.757930  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:23.257512  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:25.258146  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:27.757352  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:29.757963  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:32.257634  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:34.258269  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:36.758040  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:39.257138  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:41.257975  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:43.757450  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:45.758009  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:48.258119  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:50.757728  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:52.758476  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:55.258004  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:57.758245  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:00.257652  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:02.257901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:04.758074  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:06.758528  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:09.257856  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:11.757537  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:13.758186  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:16.257671  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:18.257951  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:20.757717  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:22.758111  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:25.258291  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:27.758147  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:30.257141  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:32.257828  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:34.258099  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:36.757903  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:39.257811  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:41.757835  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:43.757896  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:46.257631  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:48.757919  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:51.258326  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:53.757689  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:56.257769  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:58.758237  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:01.257880  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:03.258125  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:05.758255  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:08.257563  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:10.758121  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:12.758503  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:15.257621  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:17.258405  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:19.758075  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:22.257402  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:24.258425  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:26.757955  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:28.758271  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:31.257461  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:33.258189  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:35.757165  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:37.757806  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:40.257927  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:42.258011  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:44.258220  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:46.758295  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:49.258312  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:51.758196  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
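
The run of warnings above is minikube's node-readiness poll (node_ready.go) retrying roughly every 2.5s while node calico-283124 stays NotReady, which usually means the Calico CNI pods never became healthy. A minimal way to check that from the same host, assuming the calico-283124 kubeconfig context from this run is still available, would be:

    kubectl --context calico-283124 get nodes -o wide
    kubectl --context calico-283124 -n kube-system get pods -o wide
    kubectl --context calico-283124 describe node calico-283124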
	
	
	==> CRI-O <==
	Sep 08 12:44:23 no-preload-997730 crio[681]: time="2025-09-08 12:44:23.686554291Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=bd64f1af-f5b0-423f-bc7d-ae28275117b7 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:28 no-preload-997730 crio[681]: time="2025-09-08 12:44:28.686201454Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d27179bf-fe27-424a-812c-dae169903830 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:28 no-preload-997730 crio[681]: time="2025-09-08 12:44:28.686508661Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=d27179bf-fe27-424a-812c-dae169903830 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:38 no-preload-997730 crio[681]: time="2025-09-08 12:44:38.686373617Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f6d69f2e-0c97-4fe9-a415-cb29bd2be6c3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:38 no-preload-997730 crio[681]: time="2025-09-08 12:44:38.686650318Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f6d69f2e-0c97-4fe9-a415-cb29bd2be6c3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:42 no-preload-997730 crio[681]: time="2025-09-08 12:44:42.685579395Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6ae79ccd-c168-4cc3-bcc8-db5a0ee2cf5d name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:42 no-preload-997730 crio[681]: time="2025-09-08 12:44:42.685930001Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6ae79ccd-c168-4cc3-bcc8-db5a0ee2cf5d name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:42 no-preload-997730 crio[681]: time="2025-09-08 12:44:42.686609875Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=acae1586-d639-41b7-9408-a82416c1f7d1 name=/runtime.v1.ImageService/PullImage
	Sep 08 12:44:42 no-preload-997730 crio[681]: time="2025-09-08 12:44:42.687934573Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 12:44:53 no-preload-997730 crio[681]: time="2025-09-08 12:44:53.685411913Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=3c57a7ae-96f9-48d6-b61b-4b29878478ad name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:44:53 no-preload-997730 crio[681]: time="2025-09-08 12:44:53.685623048Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=3c57a7ae-96f9-48d6-b61b-4b29878478ad name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:07 no-preload-997730 crio[681]: time="2025-09-08 12:45:07.686117948Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=75f26ea8-6014-4bb3-8338-a01ee7eb9c7c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:07 no-preload-997730 crio[681]: time="2025-09-08 12:45:07.686422507Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=75f26ea8-6014-4bb3-8338-a01ee7eb9c7c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:20 no-preload-997730 crio[681]: time="2025-09-08 12:45:20.685528592Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0ae59c71-7e25-49b3-91fd-b14d5a9676e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:20 no-preload-997730 crio[681]: time="2025-09-08 12:45:20.685816322Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0ae59c71-7e25-49b3-91fd-b14d5a9676e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:23 no-preload-997730 crio[681]: time="2025-09-08 12:45:23.685314308Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7e067395-9735-4e5c-b8a5-7bf34ee2319b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:23 no-preload-997730 crio[681]: time="2025-09-08 12:45:23.685701404Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=7e067395-9735-4e5c-b8a5-7bf34ee2319b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:32 no-preload-997730 crio[681]: time="2025-09-08 12:45:32.685553639Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=159697a2-1dda-484d-a12e-2307895458c7 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:32 no-preload-997730 crio[681]: time="2025-09-08 12:45:32.685828767Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=159697a2-1dda-484d-a12e-2307895458c7 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:37 no-preload-997730 crio[681]: time="2025-09-08 12:45:37.685884871Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=02bce75d-4b03-4f95-a6fb-88ac8ea91a24 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:37 no-preload-997730 crio[681]: time="2025-09-08 12:45:37.686256453Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=02bce75d-4b03-4f95-a6fb-88ac8ea91a24 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:46 no-preload-997730 crio[681]: time="2025-09-08 12:45:46.685987983Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f2b56aae-9b25-47c2-a6bf-372f7bd0753b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:46 no-preload-997730 crio[681]: time="2025-09-08 12:45:46.686365581Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f2b56aae-9b25-47c2-a6bf-372f7bd0753b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:52 no-preload-997730 crio[681]: time="2025-09-08 12:45:52.688124800Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=080ed70d-346c-413c-9529-fabecfa8f57a name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:52 no-preload-997730 crio[681]: time="2025-09-08 12:45:52.688427769Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=080ed70d-346c-413c-9529-fabecfa8f57a name=/runtime.v1.ImageService/ImageStatus
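
The CRI-O log above shows two recurring pull failures on no-preload-997730: fake.domain/registry.k8s.io/echoserver:1.4, which as the name suggests never resolves (it is the image referenced by the metrics-server pod in this run), and docker.io/kubernetesui/dashboard:v2.7.0, which is being retried against Docker Hub. One plausible way to inspect image state directly on the node, reusing the ssh pattern and binary path from this report, would be:

    out/minikube-linux-amd64 -p no-preload-997730 ssh "sudo crictl images"
    out/minikube-linux-amd64 -p no-preload-997730 ssh "sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0"

If Docker Hub rate limiting is the blocker (see the kubelet log further down), pulling the image on the host first and then loading it with `out/minikube-linux-amd64 -p no-preload-997730 image load docker.io/kubernetesui/dashboard:v2.7.0` would sidestep the unauthenticated pull limit.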
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	116cdbcd09687       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   6                   9b4a649594034       dashboard-metrics-scraper-6ffb444bf9-c5f6j
	4b88512f9e94b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   992ea9eb9f38c       storage-provisioner
	9b7598014ed3c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   bd14f8125a980       coredns-66bc5c9577-nd9km
	7d5304b0662ac       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   ae1d5245ed943       kindnet-rm2cd
	dc14c810d71cb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   6ee42b98d10aa       busybox
	c7df65c482a8f       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   b156bc3007a0f       kube-proxy-wqscj
	cf99178b116a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   992ea9eb9f38c       storage-provisioner
	3e76448df1da8       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   d771a760cefb3       kube-controller-manager-no-preload-997730
	81618c2be90c6       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   55f30126795dc       kube-scheduler-no-preload-997730
	0ba9ac2cbe3a9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   a7e4455729718       kube-apiserver-no-preload-997730
	7758c72adbf53       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   22fb0fadc4a8d       etcd-no-preload-997730
	
	
	==> coredns [9b7598014ed3cbf3509fb26017bbe743376f7422073001c375ef931c3ea55887] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46191 - 23439 "HINFO IN 4591962211074296713.5651574647663328576. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.093395135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
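
The coredns i/o timeouts above are dials to 10.96.0.1:443, the in-cluster kubernetes Service VIP, most likely during the restart window before kube-proxy had re-programmed its rules; the later "Caches are synced" lines in the kube-proxy log suggest it recovered. A quick follow-up check, assuming the default kubeadm-style k8s-app=kube-dns label on the CoreDNS pods, could be:

    kubectl --context no-preload-997730 get svc kubernetes -n default
    kubectl --context no-preload-997730 -n kube-system logs -l k8s-app=kube-dns --tail=20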
	
	
	==> describe nodes <==
	Name:               no-preload-997730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-997730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=no-preload-997730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_35_17_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:35:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-997730
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:45:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:43:12 +0000   Mon, 08 Sep 2025 12:35:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:43:12 +0000   Mon, 08 Sep 2025 12:35:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:43:12 +0000   Mon, 08 Sep 2025 12:35:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:43:12 +0000   Mon, 08 Sep 2025 12:35:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-997730
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 08457c5968f44b66854208e727a11fe6
	  System UUID:                002053e4-2f46-4bc1-878b-646a0ed65720
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-nd9km                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-no-preload-997730                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-rm2cd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-no-preload-997730              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-no-preload-997730     200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-wqscj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-no-preload-997730              100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-c8jxj               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-c5f6j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vwd7n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m40s                  kube-proxy       
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node no-preload-997730 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node no-preload-997730 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node no-preload-997730 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node no-preload-997730 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node no-preload-997730 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node no-preload-997730 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                    node-controller  Node no-preload-997730 event: Registered Node no-preload-997730 in Controller
	  Normal   NodeReady                10m                    kubelet          Node no-preload-997730 status is now: NodeReady
	  Normal   Starting                 9m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m48s (x8 over 9m48s)  kubelet          Node no-preload-997730 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m48s (x8 over 9m48s)  kubelet          Node no-preload-997730 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m48s (x8 over 9m48s)  kubelet          Node no-preload-997730 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m38s                  node-controller  Node no-preload-997730 event: Registered Node no-preload-997730 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000689] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000005] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[Sep 8 12:37] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000008] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +2.015830] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000007] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.004384] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000005] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +4.123250] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000008] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.003960] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000006] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +8.187331] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000006] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.003987] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000005] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	
	
	==> etcd [7758c72adbf536f74cef6a4ad79725287c23eb8faa60606d60c46846663f4562] <==
	{"level":"warn","ts":"2025-09-08T12:36:11.980517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:11.994653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.004244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.012725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.027803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.036039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.084066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.091732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.100656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.123956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.130574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.137435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.144556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.151325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.183544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.191225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.198556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.205864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.212827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.220457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.228374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.262989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.275864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.282761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.332610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33676","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:45:56 up  3:28,  0 users,  load average: 0.26, 0.90, 1.61
	Linux no-preload-997730 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7d5304b0662ac1fc51eea22f8771df6cd4abb3454c9dea4d4ca00b695c659936] <==
	I0908 12:43:55.478310       1 main.go:301] handling current node
	I0908 12:44:05.478013       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:44:05.478048       1 main.go:301] handling current node
	I0908 12:44:15.477991       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:44:15.478046       1 main.go:301] handling current node
	I0908 12:44:25.477769       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:44:25.477803       1 main.go:301] handling current node
	I0908 12:44:35.477978       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:44:35.478030       1 main.go:301] handling current node
	I0908 12:44:45.478017       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:44:45.478050       1 main.go:301] handling current node
	I0908 12:44:55.478005       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:44:55.478046       1 main.go:301] handling current node
	I0908 12:45:05.477627       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:45:05.477672       1 main.go:301] handling current node
	I0908 12:45:15.477880       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:45:15.477912       1 main.go:301] handling current node
	I0908 12:45:25.477642       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:45:25.477703       1 main.go:301] handling current node
	I0908 12:45:35.477518       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:45:35.477569       1 main.go:301] handling current node
	I0908 12:45:45.477987       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:45:45.478020       1 main.go:301] handling current node
	I0908 12:45:55.478313       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:45:55.478351       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ba9ac2cbe3a9cb4b99cde9ed049902e9ce18e226d30a8ace56daec47cfdf923] <==
	I0908 12:42:11.775748       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:42:14.113964       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:42:14.114039       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:42:14.114058       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:42:14.114114       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:42:14.114153       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:42:14.115081       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:42:59.655683       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:43:24.725664       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:44:14.115033       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:44:14.115096       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:44:14.115115       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:44:14.115180       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:44:14.115255       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:44:14.117127       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:44:22.320967       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:44:29.406513       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:45:48.280162       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:45:51.263155       1 stats.go:136] "Error getting keys" err="empty key: \"\""
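
The repeated 503s for v1beta1.metrics.k8s.io above are the apiserver failing to reach the aggregated metrics-server APIService, whose backing pod never started because of the fake.domain image pull failure seen earlier; the controller-manager's "stale GroupVersion discovery" errors below share the same root cause. One way to confirm from the host, assuming the upstream k8s-app=metrics-server label on the pod, would be:

    kubectl --context no-preload-997730 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-997730 -n kube-system describe pod -l k8s-app=metrics-server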
	
	
	==> kube-controller-manager [3e76448df1da821f94c9676400494d50c5d1a2bc66c1a739602e3c46bf44a9b9] <==
	I0908 12:39:48.488419       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:40:18.438271       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:40:18.495853       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:40:48.442693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:40:48.504049       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:41:18.447450       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:41:18.511458       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:41:48.452222       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:41:48.519938       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:42:18.456560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:42:18.528679       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:42:48.461726       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:42:48.536488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:43:18.466491       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:43:18.543784       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:43:48.471416       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:43:48.551480       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:44:18.476246       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:44:18.559188       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:44:48.480351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:44:48.566927       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:45:18.485066       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:45:18.575031       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:45:48.490433       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:45:48.582931       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [c7df65c482a8fad4642d74c3b7444627e0f118a4a4ea911a84ce57eea427c96a] <==
	I0908 12:36:15.283977       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:36:15.517200       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:36:15.618033       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:36:15.618078       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0908 12:36:15.618180       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:36:15.694394       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:36:15.694474       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:36:15.700109       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:36:15.700523       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:36:15.700603       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:15.702162       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:36:15.702183       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:36:15.702186       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:36:15.702218       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:36:15.702314       1 config.go:309] "Starting node config controller"
	I0908 12:36:15.702332       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:36:15.702339       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:36:15.702408       1 config.go:200] "Starting service config controller"
	I0908 12:36:15.702477       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:36:15.803226       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:36:15.803247       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 12:36:15.803227       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [81618c2be90c6613260ac7ead7b58db175be969f12d6dba8ce9913573920b8fc] <==
	I0908 12:36:11.315087       1 serving.go:386] Generated self-signed cert in-memory
	W0908 12:36:13.077215       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 12:36:13.077371       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 12:36:13.077436       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 12:36:13.077476       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 12:36:13.283180       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 12:36:13.283327       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:13.290400       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:13.290485       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:13.290513       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 12:36:13.290682       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 12:36:13.390622       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:45:12 no-preload-997730 kubelet[816]: E0908 12:45:12.787788     816 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 12:45:12 no-preload-997730 kubelet[816]: E0908 12:45:12.787948     816 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-vwd7n_kubernetes-dashboard(537a29c5-ffc1-49e3-8a70-737656b3a999): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 12:45:12 no-preload-997730 kubelet[816]: E0908 12:45:12.787999     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n" podUID="537a29c5-ffc1-49e3-8a70-737656b3a999"
	Sep 08 12:45:14 no-preload-997730 kubelet[816]: I0908 12:45:14.685188     816 scope.go:117] "RemoveContainer" containerID="116cdbcd09687df3627cbd658b9f4625238bb89e6ce530511810377950cbe533"
	Sep 08 12:45:14 no-preload-997730 kubelet[816]: E0908 12:45:14.685433     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c5f6j_kubernetes-dashboard(ee075aa8-c59d-4df7-9ff2-ed3037d29bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c5f6j" podUID="ee075aa8-c59d-4df7-9ff2-ed3037d29bd7"
	Sep 08 12:45:18 no-preload-997730 kubelet[816]: E0908 12:45:18.788402     816 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335518788113624  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:45:18 no-preload-997730 kubelet[816]: E0908 12:45:18.788442     816 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335518788113624  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:45:20 no-preload-997730 kubelet[816]: E0908 12:45:20.686292     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-c8jxj" podUID="31558cc6-5760-42fa-85e9-9318bb3d4398"
	Sep 08 12:45:23 no-preload-997730 kubelet[816]: E0908 12:45:23.686144     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n" podUID="537a29c5-ffc1-49e3-8a70-737656b3a999"
	Sep 08 12:45:28 no-preload-997730 kubelet[816]: I0908 12:45:28.685861     816 scope.go:117] "RemoveContainer" containerID="116cdbcd09687df3627cbd658b9f4625238bb89e6ce530511810377950cbe533"
	Sep 08 12:45:28 no-preload-997730 kubelet[816]: E0908 12:45:28.686127     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c5f6j_kubernetes-dashboard(ee075aa8-c59d-4df7-9ff2-ed3037d29bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c5f6j" podUID="ee075aa8-c59d-4df7-9ff2-ed3037d29bd7"
	Sep 08 12:45:28 no-preload-997730 kubelet[816]: E0908 12:45:28.790422     816 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335528790152988  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:45:28 no-preload-997730 kubelet[816]: E0908 12:45:28.790459     816 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335528790152988  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:45:32 no-preload-997730 kubelet[816]: E0908 12:45:32.686232     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-c8jxj" podUID="31558cc6-5760-42fa-85e9-9318bb3d4398"
	Sep 08 12:45:37 no-preload-997730 kubelet[816]: E0908 12:45:37.686674     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n" podUID="537a29c5-ffc1-49e3-8a70-737656b3a999"
	Sep 08 12:45:38 no-preload-997730 kubelet[816]: E0908 12:45:38.792632     816 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335538792345894  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:45:38 no-preload-997730 kubelet[816]: E0908 12:45:38.792680     816 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335538792345894  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:45:41 no-preload-997730 kubelet[816]: I0908 12:45:41.685707     816 scope.go:117] "RemoveContainer" containerID="116cdbcd09687df3627cbd658b9f4625238bb89e6ce530511810377950cbe533"
	Sep 08 12:45:41 no-preload-997730 kubelet[816]: E0908 12:45:41.685907     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c5f6j_kubernetes-dashboard(ee075aa8-c59d-4df7-9ff2-ed3037d29bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c5f6j" podUID="ee075aa8-c59d-4df7-9ff2-ed3037d29bd7"
	Sep 08 12:45:46 no-preload-997730 kubelet[816]: E0908 12:45:46.686714     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-c8jxj" podUID="31558cc6-5760-42fa-85e9-9318bb3d4398"
	Sep 08 12:45:48 no-preload-997730 kubelet[816]: E0908 12:45:48.794051     816 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335548793786596  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:45:48 no-preload-997730 kubelet[816]: E0908 12:45:48.794117     816 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335548793786596  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:45:52 no-preload-997730 kubelet[816]: E0908 12:45:52.688817     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n" podUID="537a29c5-ffc1-49e3-8a70-737656b3a999"
	Sep 08 12:45:54 no-preload-997730 kubelet[816]: I0908 12:45:54.685285     816 scope.go:117] "RemoveContainer" containerID="116cdbcd09687df3627cbd658b9f4625238bb89e6ce530511810377950cbe533"
	Sep 08 12:45:54 no-preload-997730 kubelet[816]: E0908 12:45:54.685516     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c5f6j_kubernetes-dashboard(ee075aa8-c59d-4df7-9ff2-ed3037d29bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c5f6j" podUID="ee075aa8-c59d-4df7-9ff2-ed3037d29bd7"
	
	
	==> storage-provisioner [4b88512f9e94b34a706fe9465eeda8e748132a10da99bd52c7082bcde29020b4] <==
	W0908 12:45:31.494520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:33.498483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:33.502990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:35.506420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:35.512140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:37.515868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:37.520719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:39.524994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:39.531415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:41.535352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:41.540754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:43.543886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:43.548134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:45.551846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:45.556978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:47.561039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:47.567175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:49.571171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:49.575965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:51.579499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:51.585465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:53.588805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:53.593406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:55.596576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:45:55.602721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cf99178b116a82cfd92234054f6617f745bf03bb9d326579538f99f84f849627] <==
	I0908 12:36:14.984575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 12:36:44.991155       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997730 -n no-preload-997730
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-997730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-c8jxj kubernetes-dashboard-855c9754f9-vwd7n
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-997730 describe pod metrics-server-746fcd58dc-c8jxj kubernetes-dashboard-855c9754f9-vwd7n
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-997730 describe pod metrics-server-746fcd58dc-c8jxj kubernetes-dashboard-855c9754f9-vwd7n: exit status 1 (62.955163ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-c8jxj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vwd7n" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-997730 describe pod metrics-server-746fcd58dc-c8jxj kubernetes-dashboard-855c9754f9-vwd7n: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4bds7" [81cc6553-f21a-4023-ba22-ee82ccc64adb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 12:37:35.559808  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:35.566429  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:35.577936  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:35.599461  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:35.640972  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:35.722553  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:35.884098  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:36.206045  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:36.848077  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:38.130372  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:40.692392  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:45.814567  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:37:56.056429  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:38:16.537899  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:38:21.683362  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:38:53.275292  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:38:57.499751  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:02.378494  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:02.384972  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:02.396684  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:02.418145  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:02.459629  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:02.541198  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:02.703421  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:03.025041  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:03.667334  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:04.949338  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:05.952142  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:05.958648  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:05.970149  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:05.991623  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:06.033134  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:06.114681  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:06.276376  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:06.598441  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:07.240542  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:07.511331  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:08.522578  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:11.084382  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:12.632860  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:16.206707  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:22.874502  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:26.448497  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:27.654789  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:34.572211  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:34.578718  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:34.590427  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:34.611879  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:34.653356  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:34.734841  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:34.896412  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:35.218135  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:35.859682  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:37.141258  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:39.702799  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:43.356462  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:44.824522  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:46.930463  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:39:55.066520  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:40:15.548260  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:40:19.422077  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:40:24.318792  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:40:27.892501  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:40:37.821427  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:40:56.269843  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:40:56.510277  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:05.525760  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:09.414045  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:24.585554  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:37.117056  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:46.240186  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:49.814620  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:42:18.432320  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:42:35.559842  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:43:03.263816  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:44:02.378339  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:44:05.951226  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:44:30.081932  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:44:33.656564  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:44:34.572135  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:02.274482  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:45:37.822303  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-08 12:46:34.871383641 +0000 UTC m=+4414.164126110
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 describe po kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-039958 describe po kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-4bds7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-039958/192.168.103.2
Start Time:       Mon, 08 Sep 2025 12:37:04 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65z8c (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-65z8c:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m30s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7 to default-k8s-diff-port-039958
Normal   Pulling    4m33s (x5 over 9m29s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     4m3s (x5 over 8m59s)    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m3s (x5 over 8m59s)    kubelet            Error: ErrImagePull
Warning  Failed     2m59s (x16 over 8m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    114s (x21 over 8m59s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 logs kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-039958 logs kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard: exit status 1 (81.374717ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-4bds7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-039958 logs kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-039958
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-039958:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0",
	        "Created": "2025-09-08T12:35:12.669984605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 943028,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:36:46.840310426Z",
	            "FinishedAt": "2025-09-08T12:36:45.956332479Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0/hosts",
	        "LogPath": "/var/lib/docker/containers/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0-json.log",
	        "Name": "/default-k8s-diff-port-039958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-039958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-039958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0",
	                "LowerDir": "/var/lib/docker/overlay2/8d407d5572896c83cf6726cdeb26724de137d736428759a64b1384325408d1c9-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8d407d5572896c83cf6726cdeb26724de137d736428759a64b1384325408d1c9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8d407d5572896c83cf6726cdeb26724de137d736428759a64b1384325408d1c9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8d407d5572896c83cf6726cdeb26724de137d736428759a64b1384325408d1c9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-039958",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-039958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-039958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-039958",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-039958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b544872b161c42d21f7a410f5d652888f41e145f7481c7e3b7f536351443410",
	            "SandboxKey": "/var/run/docker/netns/6b544872b161",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-039958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:f7:86:36:20:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4fa905f06d1f71ba90a71e371fa03071b5fb80803dc7a3e0fd9c709db8b2357f",
	                    "EndpointID": "dfadf3d1154058e3578a14a5abab544a8e725b8838a5ef759f6273dae5ee74d7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-039958",
	                        "17ce0a9ee9fa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-039958 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-039958 logs -n 25: (1.358915663s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-283124 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:34 UTC │
	│ ssh     │ -p bridge-283124 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:34 UTC │
	│ ssh     │ -p bridge-283124 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │                     │
	│ ssh     │ -p bridge-283124 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:34 UTC │
	│ ssh     │ -p bridge-283124 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:34 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo containerd config dump                                                                                                                                                                                                  │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ ssh     │ -p bridge-283124 sudo crio config                                                                                                                                                                                                             │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ delete  │ -p bridge-283124                                                                                                                                                                                                                              │ bridge-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ start   │ -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-896003 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ stop    │ -p old-k8s-version-896003 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-997730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ stop    │ -p no-preload-997730 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-896003 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:35 UTC │
	│ start   │ -p old-k8s-version-896003 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:35 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable dashboard -p no-preload-997730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ start   │ -p no-preload-997730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-039958 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ stop    │ -p default-k8s-diff-port-039958 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-039958 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ start   │ -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-039958 │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:36:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:36:46.576701  942848 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:36:46.576859  942848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:36:46.576870  942848 out.go:374] Setting ErrFile to fd 2...
	I0908 12:36:46.576877  942848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:36:46.577119  942848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:36:46.577715  942848 out.go:368] Setting JSON to false
	I0908 12:36:46.579062  942848 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11951,"bootTime":1757323056,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 12:36:46.579193  942848 start.go:140] virtualization: kvm guest
	I0908 12:36:46.581327  942848 out.go:179] * [default-k8s-diff-port-039958] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 12:36:46.582661  942848 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:36:46.582698  942848 notify.go:220] Checking for updates...
	I0908 12:36:46.584965  942848 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:36:46.586098  942848 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:36:46.587326  942848 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 12:36:46.588738  942848 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 12:36:46.590003  942848 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:36:46.591594  942848 config.go:182] Loaded profile config "default-k8s-diff-port-039958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:36:46.592226  942848 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:36:46.618634  942848 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:36:46.618773  942848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:36:46.679298  942848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:36:46.668942756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:36:46.679407  942848 docker.go:318] overlay module found
	I0908 12:36:46.681016  942848 out.go:179] * Using the docker driver based on existing profile
	I0908 12:36:46.682334  942848 start.go:304] selected driver: docker
	I0908 12:36:46.682353  942848 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:36:46.682476  942848 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:36:46.683426  942848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:36:46.745282  942848 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:36:46.73243227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:36:46.745663  942848 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:36:46.745700  942848 cni.go:84] Creating CNI manager for ""
	I0908 12:36:46.745763  942848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:36:46.745814  942848 start.go:348] cluster config:
	{Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:36:46.747972  942848 out.go:179] * Starting "default-k8s-diff-port-039958" primary control-plane node in "default-k8s-diff-port-039958" cluster
	I0908 12:36:46.749230  942848 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:36:46.750628  942848 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:36:46.751931  942848 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:36:46.751992  942848 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:36:46.752002  942848 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 12:36:46.752111  942848 cache.go:58] Caching tarball of preloaded images
	I0908 12:36:46.752219  942848 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 12:36:46.752258  942848 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:36:46.752419  942848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/config.json ...
	I0908 12:36:46.780591  942848 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:36:46.780624  942848 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:36:46.780647  942848 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:36:46.780682  942848 start.go:360] acquireMachinesLock for default-k8s-diff-port-039958: {Name:mk74fa9073ebc792abfeccea0efe5ebf172e66a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:36:46.780761  942848 start.go:364] duration metric: took 51.375µs to acquireMachinesLock for "default-k8s-diff-port-039958"
	I0908 12:36:46.780788  942848 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:36:46.780799  942848 fix.go:54] fixHost starting: 
	I0908 12:36:46.781129  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:46.803941  942848 fix.go:112] recreateIfNeeded on default-k8s-diff-port-039958: state=Stopped err=<nil>
	W0908 12:36:46.803983  942848 fix.go:138] unexpected machine state, will restart: <nil>
	W0908 12:36:42.758116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:45.258155  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:42.045527  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	W0908 12:36:44.545681  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	W0908 12:36:46.546066  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	I0908 12:36:46.806070  942848 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-039958" ...
	I0908 12:36:46.806212  942848 cli_runner.go:164] Run: docker start default-k8s-diff-port-039958
	I0908 12:36:47.111853  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:47.137411  942848 kic.go:430] container "default-k8s-diff-port-039958" state is running.
	I0908 12:36:47.137907  942848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-039958
	I0908 12:36:47.162432  942848 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/config.json ...
	I0908 12:36:47.162670  942848 machine.go:93] provisionDockerMachine start ...
	I0908 12:36:47.162747  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:47.185220  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:47.185582  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:47.185597  942848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:36:47.186433  942848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51208->127.0.0.1:33483: read: connection reset by peer
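	// Editorial sketch (not part of the log): immediately after docker start, the
	// SSH port published on 127.0.0.1:33483 is not ready yet, which is why the
	// dial above fails with "connection reset by peer" before the same hostname
	// command succeeds a few seconds later on the next line. A minimal
	// wait-for-port loop in Go, standard library only; the address and timeout
	// are illustrative assumptions, not values taken from minikube's code, and a
	// full readiness check would also complete an SSH handshake.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForPort dials the address until a TCP connection is accepted or the
	// deadline passes, mirroring the retry behaviour visible in the log above.
	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s did not become reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForPort("127.0.0.1:33483", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("port is accepting connections")
	}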
	I0908 12:36:50.319771  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039958
	
	I0908 12:36:50.319812  942848 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-039958"
	I0908 12:36:50.319874  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:50.341500  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:50.341753  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:50.341765  942848 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-039958 && echo "default-k8s-diff-port-039958" | sudo tee /etc/hostname
	I0908 12:36:50.492659  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039958
	
	I0908 12:36:50.492756  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:50.516857  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:50.517256  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:50.517301  942848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-039958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-039958/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-039958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:36:50.644286  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:36:50.644321  942848 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 12:36:50.644344  942848 ubuntu.go:190] setting up certificates
	I0908 12:36:50.644356  942848 provision.go:84] configureAuth start
	I0908 12:36:50.644424  942848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-039958
	I0908 12:36:50.662342  942848 provision.go:143] copyHostCerts
	I0908 12:36:50.662414  942848 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem, removing ...
	I0908 12:36:50.662431  942848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem
	I0908 12:36:50.662496  942848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 12:36:50.662596  942848 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem, removing ...
	I0908 12:36:50.662605  942848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem
	I0908 12:36:50.662630  942848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 12:36:50.662714  942848 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem, removing ...
	I0908 12:36:50.662722  942848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem
	I0908 12:36:50.662742  942848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 12:36:50.662805  942848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-039958 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-039958 localhost minikube]
	I0908 12:36:50.862531  942848 provision.go:177] copyRemoteCerts
	I0908 12:36:50.862604  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:36:50.862646  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:50.885478  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:50.986239  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:36:51.016291  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0908 12:36:51.045268  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0908 12:36:51.069675  942848 provision.go:87] duration metric: took 425.304221ms to configureAuth
	I0908 12:36:51.069704  942848 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:36:51.069902  942848 config.go:182] Loaded profile config "default-k8s-diff-port-039958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:36:51.070014  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.094609  942848 main.go:141] libmachine: Using SSH client type: native
	I0908 12:36:51.094825  942848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I0908 12:36:51.094845  942848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:36:51.430315  942848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:36:51.430362  942848 machine.go:96] duration metric: took 4.267670025s to provisionDockerMachine
	I0908 12:36:51.430380  942848 start.go:293] postStartSetup for "default-k8s-diff-port-039958" (driver="docker")
	I0908 12:36:51.430395  942848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:36:51.430518  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:36:51.430587  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.451170  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.543737  942848 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:36:51.548216  942848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:36:51.548260  942848 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:36:51.548273  942848 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:36:51.548282  942848 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:36:51.548296  942848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 12:36:51.548366  942848 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 12:36:51.548469  942848 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem -> 6186202.pem in /etc/ssl/certs
	I0908 12:36:51.548587  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:36:51.558329  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /etc/ssl/certs/6186202.pem (1708 bytes)
	W0908 12:36:47.258256  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:49.757606  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:51.758239  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:49.046133  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	W0908 12:36:51.081394  938712 pod_ready.go:104] pod "coredns-66bc5c9577-nd9km" is not "Ready", error: <nil>
	I0908 12:36:51.586425  942848 start.go:296] duration metric: took 156.023527ms for postStartSetup
	I0908 12:36:51.586525  942848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:36:51.586571  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.613258  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.705344  942848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:36:51.711811  942848 fix.go:56] duration metric: took 4.931000802s for fixHost
	I0908 12:36:51.711849  942848 start.go:83] releasing machines lock for "default-k8s-diff-port-039958", held for 4.931072765s
	I0908 12:36:51.711931  942848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-039958
	I0908 12:36:51.734101  942848 ssh_runner.go:195] Run: cat /version.json
	I0908 12:36:51.734183  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.734267  942848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:36:51.734367  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:51.754850  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.755853  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:51.923783  942848 ssh_runner.go:195] Run: systemctl --version
	I0908 12:36:51.929275  942848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:36:52.084547  942848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:36:52.090132  942848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:36:52.101273  942848 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:36:52.101378  942848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:36:52.111707  942848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 12:36:52.111743  942848 start.go:495] detecting cgroup driver to use...
	I0908 12:36:52.111782  942848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:36:52.111825  942848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:36:52.126947  942848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:36:52.140290  942848 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:36:52.140371  942848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:36:52.154876  942848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:36:52.168633  942848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:36:52.273095  942848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:36:52.357717  942848 docker.go:234] disabling docker service ...
	I0908 12:36:52.357806  942848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:36:52.372526  942848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:36:52.385814  942848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:36:52.476450  942848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:36:52.566747  942848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:36:52.581723  942848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:36:52.605430  942848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:36:52.605564  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.619096  942848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:36:52.619198  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.632585  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.646076  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.658574  942848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:36:52.668753  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.680494  942848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.693152  942848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:36:52.705737  942848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:36:52.715688  942848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:36:52.725850  942848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:36:52.815349  942848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 12:36:53.835349  942848 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.019888602s)
	I0908 12:36:53.835376  942848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:36:53.835423  942848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:36:53.839640  942848 start.go:563] Will wait 60s for crictl version
	I0908 12:36:53.839788  942848 ssh_runner.go:195] Run: which crictl
	I0908 12:36:53.844312  942848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:36:53.880145  942848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 12:36:53.880265  942848 ssh_runner.go:195] Run: crio --version
	I0908 12:36:53.927894  942848 ssh_runner.go:195] Run: crio --version
	I0908 12:36:53.977239  942848 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
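	// Editorial sketch (not part of the log): the commands at 12:36:52 above
	// rewrite /etc/crio/crio.conf.d/02-crio.conf with sed, pinning the pause
	// image, forcing the cgroupfs cgroup manager, adjusting conmon_cgroup, and
	// then restarting CRI-O. A minimal local equivalent in Go, assuming root
	// access and the same config path; illustrative only, not minikube's
	// implementation.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func configureCRIO() error {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		// The sed expressions are copied verbatim from the log lines above.
		steps := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
			`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
			"sudo systemctl restart crio",
		}
		for _, s := range steps {
			if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
				return fmt.Errorf("%q failed: %v\n%s", s, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO(); err != nil {
			fmt.Println(err)
		}
	}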
	I0908 12:36:52.577096  938712 pod_ready.go:94] pod "coredns-66bc5c9577-nd9km" is "Ready"
	I0908 12:36:52.577136  938712 pod_ready.go:86] duration metric: took 36.037680544s for pod "coredns-66bc5c9577-nd9km" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.582158  938712 pod_ready.go:83] waiting for pod "etcd-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.588939  938712 pod_ready.go:94] pod "etcd-no-preload-997730" is "Ready"
	I0908 12:36:52.588976  938712 pod_ready.go:86] duration metric: took 6.784149ms for pod "etcd-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.591480  938712 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.598103  938712 pod_ready.go:94] pod "kube-apiserver-no-preload-997730" is "Ready"
	I0908 12:36:52.598137  938712 pod_ready.go:86] duration metric: took 6.627132ms for pod "kube-apiserver-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.601886  938712 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.743487  938712 pod_ready.go:94] pod "kube-controller-manager-no-preload-997730" is "Ready"
	I0908 12:36:52.743515  938712 pod_ready.go:86] duration metric: took 141.597757ms for pod "kube-controller-manager-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:52.944115  938712 pod_ready.go:83] waiting for pod "kube-proxy-wqscj" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.342976  938712 pod_ready.go:94] pod "kube-proxy-wqscj" is "Ready"
	I0908 12:36:53.343007  938712 pod_ready.go:86] duration metric: took 398.863544ms for pod "kube-proxy-wqscj" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.543367  938712 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.943688  938712 pod_ready.go:94] pod "kube-scheduler-no-preload-997730" is "Ready"
	I0908 12:36:53.943731  938712 pod_ready.go:86] duration metric: took 400.331351ms for pod "kube-scheduler-no-preload-997730" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:53.943745  938712 pod_ready.go:40] duration metric: took 37.408844643s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:36:54.001636  938712 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:36:54.003368  938712 out.go:179] * Done! kubectl is now configured to use "no-preload-997730" cluster and "default" namespace by default
	I0908 12:36:53.980801  942848 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-039958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:36:54.005208  942848 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0908 12:36:54.009589  942848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
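	// Editorial sketch (not part of the log): the bash pipeline above drops any
	// existing host.minikube.internal line from /etc/hosts and appends a fresh
	// "gateway<TAB>host.minikube.internal" entry. A pure-Go version of that
	// replace-or-append step, for illustration only; it is not the minikube code
	// and omits the sudo and temp-file handling shown in the command above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop the stale entry; it is re-added below
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Values taken from the log above; a real run would target /etc/hosts.
		if err := ensureHostsEntry("hosts.copy", "192.168.103.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}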
	I0908 12:36:54.022563  942848 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:36:54.022720  942848 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:36:54.022776  942848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:36:54.076190  942848 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:36:54.076225  942848 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:36:54.076295  942848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:36:54.118904  942848 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:36:54.118932  942848 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:36:54.118943  942848 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 crio true true} ...
	I0908 12:36:54.119083  942848 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-039958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:36:54.119170  942848 ssh_runner.go:195] Run: crio config
	I0908 12:36:54.171743  942848 cni.go:84] Creating CNI manager for ""
	I0908 12:36:54.171768  942848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:36:54.171782  942848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:36:54.171813  942848 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-039958 NodeName:default-k8s-diff-port-039958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:36:54.171991  942848 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-039958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
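	// Editorial sketch (not part of the log): the YAML rendered above is what
	// gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On a
	// fresh control plane a file like this is normally consumed through
	// kubeadm's --config flag; a minimal way to drive that from Go, assuming
	// the versioned kubeadm binary path shown in the log. Illustrative only; it
	// is not the exact code path minikube takes when restarting an existing
	// cluster.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.34.0/kubeadm" // path checked in the log below
		cfg := "/var/tmp/minikube/kubeadm.yaml.new"             // destination used in the scp line below

		out, err := exec.Command("sudo", kubeadm, "init", "--config", cfg).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("kubeadm init failed:", err)
		}
	}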
	I0908 12:36:54.172070  942848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:36:54.182142  942848 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:36:54.182220  942848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:36:54.192725  942848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0908 12:36:54.214079  942848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:36:54.234494  942848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0908 12:36:54.255523  942848 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:36:54.260549  942848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:36:54.274598  942848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:36:54.363767  942848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:36:54.380309  942848 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958 for IP: 192.168.103.2
	I0908 12:36:54.380327  942848 certs.go:194] generating shared ca certs ...
	I0908 12:36:54.380345  942848 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:54.380497  942848 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 12:36:54.380536  942848 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 12:36:54.380543  942848 certs.go:256] generating profile certs ...
	I0908 12:36:54.380626  942848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/client.key
	I0908 12:36:54.380670  942848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/apiserver.key.b7da9e12
	I0908 12:36:54.380700  942848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/proxy-client.key
	I0908 12:36:54.380808  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem (1338 bytes)
	W0908 12:36:54.380832  942848 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620_empty.pem, impossibly tiny 0 bytes
	I0908 12:36:54.380839  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:36:54.380860  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:36:54.380878  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:36:54.380900  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 12:36:54.380952  942848 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:36:54.381854  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:36:54.413826  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:36:54.444441  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:36:54.499191  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 12:36:54.595250  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0908 12:36:54.624909  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:36:54.652144  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:36:54.679150  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/default-k8s-diff-port-039958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 12:36:54.706419  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem --> /usr/share/ca-certificates/618620.pem (1338 bytes)
	I0908 12:36:54.733331  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /usr/share/ca-certificates/6186202.pem (1708 bytes)
	I0908 12:36:54.759761  942848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:36:54.786705  942848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:36:54.808171  942848 ssh_runner.go:195] Run: openssl version
	I0908 12:36:54.814430  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:36:54.826103  942848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:36:54.830371  942848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:36:54.830445  942848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:36:54.838010  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:36:54.848205  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/618620.pem && ln -fs /usr/share/ca-certificates/618620.pem /etc/ssl/certs/618620.pem"
	I0908 12:36:54.859075  942848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/618620.pem
	I0908 12:36:54.863257  942848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:43 /usr/share/ca-certificates/618620.pem
	I0908 12:36:54.863336  942848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/618620.pem
	I0908 12:36:54.871793  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/618620.pem /etc/ssl/certs/51391683.0"
	I0908 12:36:54.882122  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6186202.pem && ln -fs /usr/share/ca-certificates/6186202.pem /etc/ssl/certs/6186202.pem"
	I0908 12:36:54.894077  942848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6186202.pem
	I0908 12:36:54.898061  942848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:43 /usr/share/ca-certificates/6186202.pem
	I0908 12:36:54.898134  942848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6186202.pem
	I0908 12:36:54.907305  942848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6186202.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:36:54.919955  942848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:36:54.924868  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:36:54.932535  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:36:54.940947  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:36:54.949980  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:36:54.958562  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:36:54.967065  942848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
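	The openssl runs above are minikube's certificate freshness checks: x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A sketch of repeating the same check by hand, assuming the certificate paths from this log:

	    # On the node (minikube -p default-k8s-diff-port-039958 ssh), exit status 0 means
	    # the certificate is still valid for at least another 86400 seconds:
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	    echo $?   # 0 = valid for >24h, 1 = expires within 24h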
	I0908 12:36:54.980901  942848 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-039958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-039958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:36:54.981020  942848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:36:54.981071  942848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:36:55.092299  942848 cri.go:89] found id: ""
	I0908 12:36:55.092362  942848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:36:55.105002  942848 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:36:55.105028  942848 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:36:55.105086  942848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:36:55.180113  942848 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:36:55.181205  942848 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-039958" does not appear in /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:36:55.181925  942848 kubeconfig.go:62] /home/jenkins/minikube-integration/21512-614854/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-039958" cluster setting kubeconfig missing "default-k8s-diff-port-039958" context setting]
	I0908 12:36:55.182972  942848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:55.184794  942848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:36:55.203380  942848 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0908 12:36:55.203435  942848 kubeadm.go:593] duration metric: took 98.400373ms to restartPrimaryControlPlane
	I0908 12:36:55.203451  942848 kubeadm.go:394] duration metric: took 222.56119ms to StartCluster
	I0908 12:36:55.203480  942848 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:55.203583  942848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:36:55.205699  942848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:36:55.206063  942848 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:36:55.206341  942848 config.go:182] Loaded profile config "default-k8s-diff-port-039958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:36:55.206406  942848 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:36:55.206498  942848 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.206517  942848 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.206526  942848 addons.go:247] addon storage-provisioner should already be in state true
	I0908 12:36:55.206558  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.207111  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.207198  942848 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.207239  942848 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-039958"
	I0908 12:36:55.207501  942848 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.207521  942848 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.207529  942848 addons.go:247] addon dashboard should already be in state true
	I0908 12:36:55.207568  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.207608  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.207859  942848 out.go:179] * Verifying Kubernetes components...
	I0908 12:36:55.208037  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.208345  942848 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-039958"
	I0908 12:36:55.208367  942848 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.208375  942848 addons.go:247] addon metrics-server should already be in state true
	I0908 12:36:55.208414  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.208857  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.209878  942848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:36:55.234038  942848 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-039958"
	W0908 12:36:55.234074  942848 addons.go:247] addon default-storageclass should already be in state true
	I0908 12:36:55.234113  942848 host.go:66] Checking if "default-k8s-diff-port-039958" exists ...
	I0908 12:36:55.234616  942848 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-039958 --format={{.State.Status}}
	I0908 12:36:55.234716  942848 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 12:36:55.235893  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 12:36:55.235919  942848 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 12:36:55.235988  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.237895  942848 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:36:55.239291  942848 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:36:55.239317  942848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:36:55.239376  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.241236  942848 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 12:36:55.242448  942848 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 12:36:55.243535  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 12:36:55.243556  942848 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 12:36:55.243627  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.259213  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.261274  942848 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:36:55.261304  942848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:36:55.261388  942848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-039958
	I0908 12:36:55.265130  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.265428  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.287889  942848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/default-k8s-diff-port-039958/id_rsa Username:docker}
	I0908 12:36:55.506429  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 12:36:55.506482  942848 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 12:36:55.507123  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 12:36:55.507149  942848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 12:36:55.676343  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 12:36:55.676443  942848 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 12:36:55.679168  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:36:55.684795  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 12:36:55.684827  942848 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 12:36:55.699825  942848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:36:55.778296  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:36:55.778917  942848 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:36:55.778944  942848 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 12:36:55.783526  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 12:36:55.783560  942848 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 12:36:55.884019  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 12:36:55.884050  942848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 12:36:55.885352  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:36:55.993916  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 12:36:55.993953  942848 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 12:36:56.092995  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 12:36:56.093029  942848 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 12:36:56.189962  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 12:36:56.190002  942848 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 12:36:56.213231  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 12:36:56.213277  942848 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 12:36:56.298377  942848 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 12:36:56.298412  942848 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 12:36:56.321438  942848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
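	At this point the dashboard manifests have been copied to /etc/kubernetes/addons and applied in one kubectl apply with multiple -f flags, alongside the metrics-server and storage addons. A sketch for verifying the result by hand, assuming the context name default-k8s-diff-port-039958 from this log and the kubernetes-dashboard namespace that the addon's dashboard-ns.yaml creates:

	    # List the dashboard workloads and service created by the addon
	    kubectl --context default-k8s-diff-port-039958 -n kubernetes-dashboard get pods,svc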
	W0908 12:36:53.758326  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:36:56.257789  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	I0908 12:37:01.113673  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.434453923s)
	I0908 12:37:01.113770  942848 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.413909704s)
	I0908 12:37:01.113807  942848 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-039958" to be "Ready" ...
	I0908 12:37:01.114220  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.335889805s)
	I0908 12:37:01.179947  942848 node_ready.go:49] node "default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:01.179986  942848 node_ready.go:38] duration metric: took 66.160185ms for node "default-k8s-diff-port-039958" to be "Ready" ...
	I0908 12:37:01.180005  942848 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:37:01.180076  942848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:37:01.188491  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.303097359s)
	I0908 12:37:01.188538  942848 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-039958"
	I0908 12:37:01.188647  942848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.867162606s)
	I0908 12:37:01.190470  942848 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-039958 addons enable metrics-server
	
	I0908 12:37:01.192011  942848 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0908 12:37:01.193234  942848 addons.go:514] duration metric: took 5.986829567s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0908 12:37:01.196437  942848 api_server.go:72] duration metric: took 5.990326761s to wait for apiserver process to appear ...
	I0908 12:37:01.196458  942848 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:37:01.196476  942848 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0908 12:37:01.201894  942848 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:37:01.201920  942848 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:36:58.258533  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:00.758093  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	I0908 12:37:01.696590  942848 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0908 12:37:01.702086  942848 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:37:01.702131  942848 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
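	The per-check listing above is the verbose form of the apiserver's /healthz endpoint; minikube polls it until every post-start hook reports ok and the endpoint returns a plain 200, which happens at 12:37:02 just below. A sketch of querying it directly, assuming the apiserver address 192.168.103.2:8444 from this log and that the default RBAC binding still exposes /healthz to unauthenticated callers:

	    # -k skips TLS verification; ?verbose lists each check like the output above
	    curl -sk 'https://192.168.103.2:8444/healthz?verbose'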
	I0908 12:37:02.196683  942848 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0908 12:37:02.203013  942848 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0908 12:37:02.204330  942848 api_server.go:141] control plane version: v1.34.0
	I0908 12:37:02.204361  942848 api_server.go:131] duration metric: took 1.007896936s to wait for apiserver health ...
	I0908 12:37:02.204370  942848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:37:02.208721  942848 system_pods.go:59] 9 kube-system pods found
	I0908 12:37:02.208782  942848 system_pods.go:61] "coredns-66bc5c9577-gb4rh" [6a5ce944-87ea-43fc-8e1b-6b7c6602d782] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:37:02.208795  942848 system_pods.go:61] "etcd-default-k8s-diff-port-039958" [a8523098-eac8-46a4-9c22-2a2bc216a18c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:37:02.208804  942848 system_pods.go:61] "kindnet-89lwp" [7f14a0af-69dc-410a-a7de-fd608eb510b7] Running
	I0908 12:37:02.208812  942848 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039958" [94dee560-f2b4-4c2b-beb8-9f3b0042b381] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:37:02.208819  942848 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039958" [570eff95-5de5-4d44-8c20-2d50d16c0d96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:37:02.208831  942848 system_pods.go:61] "kube-proxy-cgrs8" [a898f47e-1688-4fb8-8f23-f1a05d5a1f33] Running
	I0908 12:37:02.208836  942848 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039958" [d958fbef-3165-4429-b5a0-305a0ff21dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:37:02.208841  942848 system_pods.go:61] "metrics-server-746fcd58dc-hvqdm" [d648640c-2cab-4575-8290-51c39f0a19b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:37:02.208844  942848 system_pods.go:61] "storage-provisioner" [89c03a4b-2580-4ee8-a683-d1523dd99fef] Running
	I0908 12:37:02.208850  942848 system_pods.go:74] duration metric: took 4.474582ms to wait for pod list to return data ...
	I0908 12:37:02.208861  942848 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:37:02.211700  942848 default_sa.go:45] found service account: "default"
	I0908 12:37:02.211729  942848 default_sa.go:55] duration metric: took 2.854101ms for default service account to be created ...
	I0908 12:37:02.211739  942848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:37:02.215028  942848 system_pods.go:86] 9 kube-system pods found
	I0908 12:37:02.215070  942848 system_pods.go:89] "coredns-66bc5c9577-gb4rh" [6a5ce944-87ea-43fc-8e1b-6b7c6602d782] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:37:02.215078  942848 system_pods.go:89] "etcd-default-k8s-diff-port-039958" [a8523098-eac8-46a4-9c22-2a2bc216a18c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:37:02.215083  942848 system_pods.go:89] "kindnet-89lwp" [7f14a0af-69dc-410a-a7de-fd608eb510b7] Running
	I0908 12:37:02.215088  942848 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-039958" [94dee560-f2b4-4c2b-beb8-9f3b0042b381] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:37:02.215095  942848 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-039958" [570eff95-5de5-4d44-8c20-2d50d16c0d96] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:37:02.215099  942848 system_pods.go:89] "kube-proxy-cgrs8" [a898f47e-1688-4fb8-8f23-f1a05d5a1f33] Running
	I0908 12:37:02.215105  942848 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-039958" [d958fbef-3165-4429-b5a0-305a0ff21dc5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:37:02.215109  942848 system_pods.go:89] "metrics-server-746fcd58dc-hvqdm" [d648640c-2cab-4575-8290-51c39f0a19b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:37:02.215119  942848 system_pods.go:89] "storage-provisioner" [89c03a4b-2580-4ee8-a683-d1523dd99fef] Running
	I0908 12:37:02.215127  942848 system_pods.go:126] duration metric: took 3.381403ms to wait for k8s-apps to be running ...
	I0908 12:37:02.215134  942848 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:37:02.215182  942848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:37:02.228839  942848 system_svc.go:56] duration metric: took 13.689257ms WaitForService to wait for kubelet
	I0908 12:37:02.228878  942848 kubeadm.go:578] duration metric: took 7.022770217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:37:02.228905  942848 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:37:02.232419  942848 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 12:37:02.232451  942848 node_conditions.go:123] node cpu capacity is 8
	I0908 12:37:02.232465  942848 node_conditions.go:105] duration metric: took 3.554674ms to run NodePressure ...
	I0908 12:37:02.232479  942848 start.go:241] waiting for startup goroutines ...
	I0908 12:37:02.232487  942848 start.go:246] waiting for cluster config update ...
	I0908 12:37:02.232498  942848 start.go:255] writing updated cluster config ...
	I0908 12:37:02.232770  942848 ssh_runner.go:195] Run: rm -f paused
	I0908 12:37:02.236948  942848 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:37:02.241091  942848 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gb4rh" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 12:37:04.247344  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:06.247957  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:03.257787  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:05.757720  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:08.748018  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:11.247224  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:08.257784  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:10.757968  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:13.747206  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:15.748096  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:13.258116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:15.757477  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:18.247360  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:20.247841  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:17.757872  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:19.757985  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:22.747356  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:25.247866  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:22.258365  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:24.757273  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:27.747272  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:29.747903  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	W0908 12:37:27.257759  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:29.758603  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:32.246724  942848 pod_ready.go:104] pod "coredns-66bc5c9577-gb4rh" is not "Ready", error: <nil>
	I0908 12:37:32.746600  942848 pod_ready.go:94] pod "coredns-66bc5c9577-gb4rh" is "Ready"
	I0908 12:37:32.746633  942848 pod_ready.go:86] duration metric: took 30.50551235s for pod "coredns-66bc5c9577-gb4rh" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.749803  942848 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.754452  942848 pod_ready.go:94] pod "etcd-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:32.754481  942848 pod_ready.go:86] duration metric: took 4.650443ms for pod "etcd-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.757100  942848 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.761953  942848 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:32.761985  942848 pod_ready.go:86] duration metric: took 4.849995ms for pod "kube-apiserver-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.764191  942848 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:32.945383  942848 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:32.945422  942848 pod_ready.go:86] duration metric: took 181.203994ms for pod "kube-controller-manager-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:33.145058  942848 pod_ready.go:83] waiting for pod "kube-proxy-cgrs8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:33.544898  942848 pod_ready.go:94] pod "kube-proxy-cgrs8" is "Ready"
	I0908 12:37:33.544927  942848 pod_ready.go:86] duration metric: took 399.833177ms for pod "kube-proxy-cgrs8" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:33.745634  942848 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:34.144919  942848 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-039958" is "Ready"
	I0908 12:37:34.144965  942848 pod_ready.go:86] duration metric: took 399.29663ms for pod "kube-scheduler-default-k8s-diff-port-039958" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:37:34.144988  942848 pod_ready.go:40] duration metric: took 31.907998549s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:37:34.196309  942848 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:37:34.198553  942848 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-039958" cluster and "default" namespace by default
	W0908 12:37:32.257319  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:34.258404  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:36.757652  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:38.758275  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:41.257525  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:43.757901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:45.758150  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:48.257273  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:50.257639  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:52.757594  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:54.758061  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:56.758378  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:37:59.258066  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:01.757513  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:03.758132  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:06.258116  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:08.757359  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:11.257772  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:13.258266  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:15.758076  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:18.258221  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:20.757456  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:22.757615  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:24.757944  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:27.257481  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:29.257676  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:31.757922  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:33.757998  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:35.758189  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:38.257284  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:40.258186  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:42.757563  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:44.758049  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:47.258155  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:49.758499  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:52.257549  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:54.257641  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:56.257796  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:38:58.758359  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:01.257901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:03.757752  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:06.257817  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:08.258296  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:10.757713  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:13.258258  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:15.757976  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:18.257584  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:20.257682  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:22.758060  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:25.257404  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:27.257971  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:29.757975  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:32.257556  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:34.257819  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:36.757633  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:38.757871  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:41.257638  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:43.257970  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:45.757733  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:47.758232  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:49.758583  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:52.257803  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:54.257902  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:56.758212  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:39:59.257321  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:01.257592  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:03.757620  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:06.257707  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:08.757824  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:11.257105  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:13.257921  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:15.258039  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:17.758096  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:20.258070  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:22.757269  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:24.757608  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:27.257916  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:29.758141  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:32.257932  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:34.758358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:37.257458  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:39.257731  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:41.758247  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:44.257810  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:46.258011  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:48.758378  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:51.257347  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:53.757974  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:56.258386  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:40:58.757745  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:00.758360  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:03.257917  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:05.758305  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:08.257694  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:10.757411  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:12.757802  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:15.258051  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:17.258437  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:19.758059  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:22.257165  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:24.257861  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:26.758229  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:29.257287  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:31.257940  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:33.757609  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:36.257193  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:38.257338  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:40.259086  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:42.757325  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:44.757506  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:46.757651  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:48.758048  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:51.257798  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:53.757260  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:55.758043  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:41:58.257673  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:00.757447  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:03.258213  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:05.758038  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:08.257935  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:10.757253  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:12.757315  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:14.758076  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:17.257358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:19.257904  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:21.757955  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:23.758139  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:26.258024  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:28.758804  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:31.257119  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:33.257824  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:35.257908  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:37.757486  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:39.757547  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:41.757854  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:44.258038  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:46.258403  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:48.758374  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:51.257068  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:53.258291  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:55.757944  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:42:58.258146  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:00.258571  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:02.758004  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:05.257358  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:07.257469  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:09.258160  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:11.758090  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:14.257557  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:16.257748  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:18.258516  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:20.757930  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:23.257512  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:25.258146  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:27.757352  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:29.757963  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:32.257634  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:34.258269  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:36.758040  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:39.257138  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:41.257975  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:43.757450  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:45.758009  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:48.258119  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:50.757728  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:52.758476  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:55.258004  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:43:57.758245  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:00.257652  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:02.257901  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:04.758074  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:06.758528  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:09.257856  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:11.757537  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:13.758186  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:16.257671  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:18.257951  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:20.757717  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:22.758111  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:25.258291  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:27.758147  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:30.257141  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:32.257828  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:34.258099  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:36.757903  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:39.257811  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:41.757835  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:43.757896  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:46.257631  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:48.757919  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:51.258326  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:53.757689  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:56.257769  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:44:58.758237  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:01.257880  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:03.258125  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:05.758255  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:08.257563  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:10.758121  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:12.758503  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:15.257621  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:17.258405  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:19.758075  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:22.257402  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:24.258425  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:26.757955  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:28.758271  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:31.257461  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:33.258189  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:35.757165  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:37.757806  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:40.257927  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:42.258011  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:44.258220  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:46.758295  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:49.258312  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:51.758196  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:54.257814  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:56.758797  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:45:59.258173  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:01.757360  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:03.758217  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:06.257756  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:08.757562  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:11.258350  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:13.757334  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:15.757830  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:18.258351  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:20.758305  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:23.258066  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:25.758112  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:28.258058  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	W0908 12:46:30.758007  876341 node_ready.go:57] node "calico-283124" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Sep 08 12:45:06 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:06.506312139Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f4373f38-f669-4ad3-b7e0-8a4efa557a47 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:09 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:09.506271152Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d03388f3-f8aa-416c-a6d4-42b5805115f4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:09 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:09.506512223Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d03388f3-f8aa-416c-a6d4-42b5805115f4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:20 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:20.506090950Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=47c2d062-26ab-4c6c-8560-6f96fcad3ade name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:20 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:20.506417673Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=47c2d062-26ab-4c6c-8560-6f96fcad3ade name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:20 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:20.507003829Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=28e58bc4-8f9b-40e6-b773-2ffdaea632f8 name=/runtime.v1.ImageService/PullImage
	Sep 08 12:45:20 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:20.508398403Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 12:45:24 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:24.506732184Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=77a38ab2-5af5-4ac7-91b3-fa8b1935f823 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:24 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:24.507058179Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=77a38ab2-5af5-4ac7-91b3-fa8b1935f823 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:35 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:35.505839098Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=eca0d953-e06c-4091-a919-3582b6282091 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:35 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:35.506163638Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=eca0d953-e06c-4091-a919-3582b6282091 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:47 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:47.506522537Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=bce14006-3f17-4384-8999-2e58b10e1a9b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:45:47 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:45:47.506772033Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=bce14006-3f17-4384-8999-2e58b10e1a9b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:01 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:01.506075617Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=427b168a-0840-4a6e-a087-d4a07efc4d8f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:01 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:01.506362265Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=427b168a-0840-4a6e-a087-d4a07efc4d8f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:04 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:04.506936028Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b95bd2be-63d4-47ab-ba15-747730fb56cb name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:04 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:04.507337246Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b95bd2be-63d4-47ab-ba15-747730fb56cb name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:13 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:13.508796575Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6deb5c13-3355-4e84-b12a-8adc2d54eb36 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:13 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:13.509109570Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6deb5c13-3355-4e84-b12a-8adc2d54eb36 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:15 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:15.505997190Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ca45ba85-d3c2-461d-aa3c-e5964e8311c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:15 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:15.506333108Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ca45ba85-d3c2-461d-aa3c-e5964e8311c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:28 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:28.505863343Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b1e4756c-dc21-448f-98fc-137bf56940c5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:28 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:28.506164098Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b1e4756c-dc21-448f-98fc-137bf56940c5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:30 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:30.506345260Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4f28bd1b-39aa-4ef2-864e-dca91407ccf6 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:46:30 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:46:30.506688395Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4f28bd1b-39aa-4ef2-864e-dca91407ccf6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	bcf59505099f1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   3a81e35b5123f       dashboard-metrics-scraper-6ffb444bf9-d9vtd
	7ccf934f01964       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   1ae25c12f7796       storage-provisioner
	95f26f1fc268d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   11d2bb15190cf       busybox
	4f1209901005e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   934e38b5787b0       kindnet-89lwp
	635968dd094ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   1ae25c12f7796       storage-provisioner
	935c541edad30       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   22569d0f2c042       kube-proxy-cgrs8
	2e649966366d7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   3fa4dbe083d4f       coredns-66bc5c9577-gb4rh
	cfa239a4247ff       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   e057f4f3944db       kube-controller-manager-default-k8s-diff-port-039958
	9ba8aa1de66dd       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   accbbe8633db4       kube-scheduler-default-k8s-diff-port-039958
	95537c4837bf6       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   955317795ded0       kube-apiserver-default-k8s-diff-port-039958
	cc7207e2cb8e1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   11a0c9dbb498e       etcd-default-k8s-diff-port-039958
	
	
	==> coredns [2e649966366d752380cbb3e0cb8ec21cbe00581553b49ad9f2b8bc8219424879] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37144 - 748 "HINFO IN 980441565112902190.8224468124574063050. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022189332s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-039958
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-039958
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=default-k8s-diff-port-039958
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_35_35_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-039958
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:46:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:42:55 +0000   Mon, 08 Sep 2025 12:35:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:42:55 +0000   Mon, 08 Sep 2025 12:35:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:42:55 +0000   Mon, 08 Sep 2025 12:35:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:42:55 +0000   Mon, 08 Sep 2025 12:36:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-039958
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb4822e57e9a4bda9fd6d32cc8567a71
	  System UUID:                c1a4b17b-d533-4931-9b70-905556f15444
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-gb4rh                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-default-k8s-diff-port-039958                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-89lwp                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-039958             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-039958    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-cgrs8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-039958             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-746fcd58dc-hvqdm                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-d9vtd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4bds7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m34s                  kube-proxy       
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                    node-controller  Node default-k8s-diff-port-039958 event: Registered Node default-k8s-diff-port-039958 in Controller
	  Normal   NodeReady                10m                    kubelet          Node default-k8s-diff-port-039958 status is now: NodeReady
	  Normal   Starting                 9m42s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m42s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m42s (x8 over 9m42s)  kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m42s (x8 over 9m42s)  kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m42s (x8 over 9m42s)  kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m32s                  node-controller  Node default-k8s-diff-port-039958 event: Registered Node default-k8s-diff-port-039958 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000689] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000005] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[Sep 8 12:37] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000008] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +2.015830] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000007] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.004384] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000005] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +4.123250] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000008] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.003960] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000006] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +8.187331] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000006] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.003987] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000005] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-4fa905f06d1f
	[  +0.000002] ll header: 00000000: fa 3c ce 84 95 92 8a f7 86 36 20 87 08 00
	
	
	==> etcd [cc7207e2cb8e143c45822375891cfe394fc5a0816d16278c556c510b63826bbc] <==
	{"level":"warn","ts":"2025-09-08T12:36:57.494749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.502962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.510904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.583611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.592625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.601859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.609590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.683196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.690913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.699050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.723968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.776532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.784784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.793284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.803005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.813268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.876089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.884377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.893461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.910948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.918606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.980614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.989023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.997816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:58.105006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58392","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:46:36 up  3:28,  0 users,  load average: 0.21, 0.80, 1.55
	Linux default-k8s-diff-port-039958 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4f1209901005ec30cd4049c8735ca3659731f92942cb51582531c6ce3676c955] <==
	I0908 12:44:31.288129       1 main.go:301] handling current node
	I0908 12:44:41.291821       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:44:41.291856       1 main.go:301] handling current node
	I0908 12:44:51.295813       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:44:51.295862       1 main.go:301] handling current node
	I0908 12:45:01.288473       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:45:01.288523       1 main.go:301] handling current node
	I0908 12:45:11.287259       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:45:11.287327       1 main.go:301] handling current node
	I0908 12:45:21.296145       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:45:21.296179       1 main.go:301] handling current node
	I0908 12:45:31.295839       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:45:31.295892       1 main.go:301] handling current node
	I0908 12:45:41.289329       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:45:41.289368       1 main.go:301] handling current node
	I0908 12:45:51.295750       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:45:51.295782       1 main.go:301] handling current node
	I0908 12:46:01.287417       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:46:01.287450       1 main.go:301] handling current node
	I0908 12:46:11.287926       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:46:11.287985       1 main.go:301] handling current node
	I0908 12:46:21.291788       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:46:21.291831       1 main.go:301] handling current node
	I0908 12:46:31.287687       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:46:31.287730       1 main.go:301] handling current node
	
	
	==> kube-apiserver [95537c4837bf61514d4f2e6f9aa4ef0a11b66d9c147b7817c16eeb8016929989] <==
	I0908 12:42:27.124245       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:42:59.908608       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:42:59.908666       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:42:59.908682       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:42:59.909717       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:42:59.909836       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:42:59.909855       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:43:30.845191       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:43:33.972179       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:44:55.950932       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:44:56.882298       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:44:59.909580       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:44:59.909634       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:44:59.909648       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:44:59.910819       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:44:59.910922       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:44:59.910940       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:46:01.967452       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:46:16.729074       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [cfa239a4247ff63369ba72728e0c8dcded1b41e1da1837fa9b00ec1565c72fa8] <==
	I0908 12:40:34.373834       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:41:04.307355       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:41:04.382346       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:41:34.312303       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:41:34.390613       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:42:04.318476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:42:04.399060       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:42:34.323072       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:42:34.406946       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:43:04.327616       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:43:04.414551       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:43:34.332432       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:43:34.422204       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:44:04.337886       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:44:04.430903       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:44:34.342641       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:44:34.438686       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:45:04.347881       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:45:04.445510       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:45:34.352683       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:45:34.453506       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:46:04.358832       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:46:04.461638       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:46:34.363801       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:46:34.469295       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [935c541edad304c491ccb32b3c28fdada3be3f6a8ec4b9dea337c1ce6a25e312] <==
	I0908 12:37:01.004027       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:37:01.229631       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:37:01.330514       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:37:01.330561       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0908 12:37:01.330669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:37:01.351166       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:37:01.351234       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:37:01.355489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:37:01.355897       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:37:01.355914       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:37:01.356973       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:37:01.357086       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:37:01.356987       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:37:01.357167       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:37:01.356995       1 config.go:200] "Starting service config controller"
	I0908 12:37:01.357193       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:37:01.357051       1 config.go:309] "Starting node config controller"
	I0908 12:37:01.357219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:37:01.357225       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:37:01.457824       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 12:37:01.457830       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:37:01.457887       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9ba8aa1de66dd67173b1fe0009e5705648249811349c1d6abfeb23f588943eaf] <==
	I0908 12:36:56.616494       1 serving.go:386] Generated self-signed cert in-memory
	W0908 12:36:58.826652       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 12:36:58.826686       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 12:36:58.826696       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 12:36:58.826703       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 12:36:58.990632       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 12:36:58.990762       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:58.995308       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:58.995372       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:58.996067       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 12:36:58.996168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 12:36:59.096535       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:45:50 default-k8s-diff-port-039958 kubelet[821]: E0908 12:45:50.593792     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7" podUID="81cc6553-f21a-4023-ba22-ee82ccc64adb"
	Sep 08 12:45:54 default-k8s-diff-port-039958 kubelet[821]: E0908 12:45:54.664201     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335554663890314  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:45:54 default-k8s-diff-port-039958 kubelet[821]: E0908 12:45:54.664243     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335554663890314  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:45:55 default-k8s-diff-port-039958 kubelet[821]: I0908 12:45:55.505467     821 scope.go:117] "RemoveContainer" containerID="bcf59505099f197e06aa7c1aaeaf00558b4e6e0aafc695e99464c760a05b836b"
	Sep 08 12:45:55 default-k8s-diff-port-039958 kubelet[821]: E0908 12:45:55.505731     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d9vtd_kubernetes-dashboard(c29d2f54-4eb1-4591-aca1-6507b6c73788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d9vtd" podUID="c29d2f54-4eb1-4591-aca1-6507b6c73788"
	Sep 08 12:46:01 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:01.506772     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-hvqdm" podUID="d648640c-2cab-4575-8290-51c39f0a19b3"
	Sep 08 12:46:04 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:04.507793     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7" podUID="81cc6553-f21a-4023-ba22-ee82ccc64adb"
	Sep 08 12:46:04 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:04.665293     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335564665024858  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:46:04 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:04.665340     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335564665024858  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:46:10 default-k8s-diff-port-039958 kubelet[821]: I0908 12:46:10.505728     821 scope.go:117] "RemoveContainer" containerID="bcf59505099f197e06aa7c1aaeaf00558b4e6e0aafc695e99464c760a05b836b"
	Sep 08 12:46:10 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:10.505978     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d9vtd_kubernetes-dashboard(c29d2f54-4eb1-4591-aca1-6507b6c73788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d9vtd" podUID="c29d2f54-4eb1-4591-aca1-6507b6c73788"
	Sep 08 12:46:13 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:13.510062     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-hvqdm" podUID="d648640c-2cab-4575-8290-51c39f0a19b3"
	Sep 08 12:46:14 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:14.666566     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335574666321175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:46:14 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:14.666603     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335574666321175  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:46:15 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:15.506704     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7" podUID="81cc6553-f21a-4023-ba22-ee82ccc64adb"
	Sep 08 12:46:22 default-k8s-diff-port-039958 kubelet[821]: I0908 12:46:22.505911     821 scope.go:117] "RemoveContainer" containerID="bcf59505099f197e06aa7c1aaeaf00558b4e6e0aafc695e99464c760a05b836b"
	Sep 08 12:46:22 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:22.506132     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d9vtd_kubernetes-dashboard(c29d2f54-4eb1-4591-aca1-6507b6c73788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d9vtd" podUID="c29d2f54-4eb1-4591-aca1-6507b6c73788"
	Sep 08 12:46:24 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:24.667979     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335584667755657  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:46:24 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:24.668019     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335584667755657  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:46:28 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:28.506476     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-hvqdm" podUID="d648640c-2cab-4575-8290-51c39f0a19b3"
	Sep 08 12:46:30 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:30.507091     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7" podUID="81cc6553-f21a-4023-ba22-ee82ccc64adb"
	Sep 08 12:46:34 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:34.670787     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335594669794587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:46:34 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:34.671549     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335594669794587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:46:35 default-k8s-diff-port-039958 kubelet[821]: I0908 12:46:35.505290     821 scope.go:117] "RemoveContainer" containerID="bcf59505099f197e06aa7c1aaeaf00558b4e6e0aafc695e99464c760a05b836b"
	Sep 08 12:46:35 default-k8s-diff-port-039958 kubelet[821]: E0908 12:46:35.505512     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d9vtd_kubernetes-dashboard(c29d2f54-4eb1-4591-aca1-6507b6c73788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d9vtd" podUID="c29d2f54-4eb1-4591-aca1-6507b6c73788"
	
	
	==> storage-provisioner [635968dd094acd324b55a3848301449f01f2e1335bc2775c4064ec3ff9ef0a65] <==
	I0908 12:37:00.878178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 12:37:30.882001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7ccf934f01964d65bd372427ec74cbe04850be842ff2c31f93e83c05f7335fa9] <==
	W0908 12:46:11.440249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:13.443365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:13.447718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:15.451535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:15.455922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:17.459517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:17.464116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:19.467608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:19.472065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:21.475894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:21.481288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:23.484422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:23.489003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:25.492356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:25.498171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:27.501969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:27.506761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:29.509774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:29.514689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:31.518695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:31.523747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:33.527092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:33.533830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:35.538160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:46:35.543116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-hvqdm kubernetes-dashboard-855c9754f9-4bds7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 describe pod metrics-server-746fcd58dc-hvqdm kubernetes-dashboard-855c9754f9-4bds7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-039958 describe pod metrics-server-746fcd58dc-hvqdm kubernetes-dashboard-855c9754f9-4bds7: exit status 1 (68.483811ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-hvqdm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4bds7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-039958 describe pod metrics-server-746fcd58dc-hvqdm kubernetes-dashboard-855c9754f9-4bds7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7dbrb" [4704cc58-09c7-49d5-a649-d7e9fd6c1297] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-896003 -n old-k8s-version-896003
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-08 12:54:41.250906117 +0000 UTC m=+4900.543648589
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-896003 describe po kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context old-k8s-version-896003 describe po kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-7dbrb
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-896003/192.168.85.2
Start Time:       Mon, 08 Sep 2025 12:36:15 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zpgmq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-zpgmq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb to old-k8s-version-896003
  Normal   Pulling    15m (x4 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     14m (x4 over 17m)     kubelet            Error: ErrImagePull
  Warning  Failed     14m (x6 over 17m)     kubelet            Error: ImagePullBackOff
  Warning  Failed     12m (x5 over 17m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    3m14s (x50 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-896003 logs kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-896003 logs kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard: exit status 1 (78.239652ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-7dbrb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context old-k8s-version-896003 logs kubernetes-dashboard-8694d4445c-7dbrb -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-896003 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-896003
helpers_test.go:243: (dbg) docker inspect old-k8s-version-896003:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7",
	        "Created": "2025-09-08T12:34:41.454905879Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 936820,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:35:51.787540238Z",
	            "FinishedAt": "2025-09-08T12:35:50.960911601Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7/hosts",
	        "LogPath": "/var/lib/docker/containers/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7/b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7-json.log",
	        "Name": "/old-k8s-version-896003",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-896003:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-896003",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b5a486cde8d67d69c8d4f800ed04c6d76b8c0508bba7be903886e32ef34bc8c7",
	                "LowerDir": "/var/lib/docker/overlay2/55405109ee2707194600695162bcd95b400c46c234120d59f9858c0ddffc7f9a-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55405109ee2707194600695162bcd95b400c46c234120d59f9858c0ddffc7f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55405109ee2707194600695162bcd95b400c46c234120d59f9858c0ddffc7f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55405109ee2707194600695162bcd95b400c46c234120d59f9858c0ddffc7f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-896003",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-896003/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-896003",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-896003",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-896003",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b8ff3678afbcaaeea1c8d7fab6e289df9defe62e616e94ced89ff3f87425dfe2",
	            "SandboxKey": "/var/run/docker/netns/b8ff3678afbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-896003": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:e3:f0:e5:43:e4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "64680b14c5747b6ba7a6a2cd81d8d5f27c97be0f750b792a24a5e67bd1710746",
	                    "EndpointID": "f7da3850caf4bf0d9e10d9ddee68a0c968d16a3f27216496844aa00a2a9cfe82",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-896003",
	                        "b5a486cde8d6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-896003 -n old-k8s-version-896003
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-896003 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-896003 logs -n 25: (1.274009444s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-283124 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ ssh     │ -p calico-283124 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ ssh     │ -p calico-283124 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ ssh     │ -p calico-283124 sudo containerd config dump                                                                                                                                                                                                  │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ ssh     │ -p calico-283124 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ ssh     │ -p calico-283124 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ ssh     │ -p calico-283124 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ ssh     │ -p calico-283124 sudo crio config                                                                                                                                                                                                             │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ delete  │ -p calico-283124                                                                                                                                                                                                                              │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ start   │ -p newest-cni-139998 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:47 UTC │
	│ addons  │ enable metrics-server -p newest-cni-139998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ stop    │ -p newest-cni-139998 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-139998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ start   │ -p newest-cni-139998 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ image   │ newest-cni-139998 image list --format=json                                                                                                                                                                                                    │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ pause   │ -p newest-cni-139998 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ unpause │ -p newest-cni-139998 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ delete  │ -p newest-cni-139998                                                                                                                                                                                                                          │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ delete  │ -p newest-cni-139998                                                                                                                                                                                                                          │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ delete  │ -p disable-driver-mounts-173021                                                                                                                                                                                                               │ disable-driver-mounts-173021 │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ start   │ -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:49 UTC │
	│ addons  │ enable metrics-server -p embed-certs-095356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:49 UTC │
	│ stop    │ -p embed-certs-095356 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:49 UTC │
	│ addons  │ enable dashboard -p embed-certs-095356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:49 UTC │
	│ start   │ -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:50 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:49:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:49:23.279400  968017 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:49:23.279807  968017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:49:23.279824  968017 out.go:374] Setting ErrFile to fd 2...
	I0908 12:49:23.279829  968017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:49:23.280064  968017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:49:23.280789  968017 out.go:368] Setting JSON to false
	I0908 12:49:23.282282  968017 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12707,"bootTime":1757323056,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 12:49:23.282415  968017 start.go:140] virtualization: kvm guest
	I0908 12:49:23.284711  968017 out.go:179] * [embed-certs-095356] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 12:49:23.286739  968017 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:49:23.286750  968017 notify.go:220] Checking for updates...
	I0908 12:49:23.289669  968017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:49:23.291064  968017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:49:23.292333  968017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 12:49:23.293647  968017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 12:49:23.295067  968017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:49:23.296896  968017 config.go:182] Loaded profile config "embed-certs-095356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:49:23.297523  968017 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:49:23.323231  968017 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:49:23.323393  968017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:49:23.377796  968017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:49:23.367734602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:49:23.377913  968017 docker.go:318] overlay module found
	I0908 12:49:23.379836  968017 out.go:179] * Using the docker driver based on existing profile
	I0908 12:49:23.381063  968017 start.go:304] selected driver: docker
	I0908 12:49:23.381087  968017 start.go:918] validating driver "docker" against &{Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:49:23.381212  968017 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:49:23.382437  968017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:49:23.441035  968017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:49:23.430451531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:49:23.441421  968017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:49:23.441475  968017 cni.go:84] Creating CNI manager for ""
	I0908 12:49:23.441548  968017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:49:23.441616  968017 start.go:348] cluster config:
	{Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:49:23.444524  968017 out.go:179] * Starting "embed-certs-095356" primary control-plane node in "embed-certs-095356" cluster
	I0908 12:49:23.446148  968017 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:49:23.447633  968017 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:49:23.448890  968017 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:49:23.448967  968017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 12:49:23.448984  968017 cache.go:58] Caching tarball of preloaded images
	I0908 12:49:23.449045  968017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:49:23.449154  968017 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 12:49:23.449170  968017 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:49:23.449314  968017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/config.json ...
	I0908 12:49:23.470704  968017 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:49:23.470727  968017 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:49:23.470746  968017 cache.go:232] Successfully downloaded all kic artifacts
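
	The pull is skipped purely because the digest-pinned kicbase image is already loaded in the local Docker daemon. A minimal manual version of that check (sketch; the tag is copied from the log above, the sha256 suffix dropped for brevity):

	    KIC="gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488"
	    if docker image inspect "$KIC" >/dev/null 2>&1; then
	      echo "base image already in the local daemon - pull skipped"
	    else
	      echo "base image missing - minikube would pull it"
	    fi
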
	I0908 12:49:23.470778  968017 start.go:360] acquireMachinesLock for embed-certs-095356: {Name:mk9355040c36d7eff54da75f6473007cb8502c78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:49:23.470872  968017 start.go:364] duration metric: took 46.58µs to acquireMachinesLock for "embed-certs-095356"
	I0908 12:49:23.470895  968017 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:49:23.470902  968017 fix.go:54] fixHost starting: 
	I0908 12:49:23.471117  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:23.490230  968017 fix.go:112] recreateIfNeeded on embed-certs-095356: state=Stopped err=<nil>
	W0908 12:49:23.490302  968017 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:49:23.492246  968017 out.go:252] * Restarting existing docker container for "embed-certs-095356" ...
	I0908 12:49:23.492346  968017 cli_runner.go:164] Run: docker start embed-certs-095356
	I0908 12:49:23.750403  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:23.769795  968017 kic.go:430] container "embed-certs-095356" state is running.
	I0908 12:49:23.770316  968017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-095356
	I0908 12:49:23.790284  968017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/config.json ...
	I0908 12:49:23.790565  968017 machine.go:93] provisionDockerMachine start ...
	I0908 12:49:23.790652  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:23.813467  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:23.813785  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:23.813800  968017 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:49:23.814519  968017 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48384->127.0.0.1:33503: read: connection reset by peer
	I0908 12:49:26.939977  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-095356
	
	I0908 12:49:26.940015  968017 ubuntu.go:182] provisioning hostname "embed-certs-095356"
	I0908 12:49:26.940103  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:26.960115  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:26.960359  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:26.960375  968017 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-095356 && echo "embed-certs-095356" | sudo tee /etc/hostname
	I0908 12:49:27.098216  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-095356
	
	I0908 12:49:27.098349  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:27.119969  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:27.120236  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:27.120258  968017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-095356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-095356/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-095356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:49:27.244836  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:49:27.244884  968017 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 12:49:27.244919  968017 ubuntu.go:190] setting up certificates
	I0908 12:49:27.244946  968017 provision.go:84] configureAuth start
	I0908 12:49:27.245061  968017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-095356
	I0908 12:49:27.264701  968017 provision.go:143] copyHostCerts
	I0908 12:49:27.264782  968017 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem, removing ...
	I0908 12:49:27.264800  968017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem
	I0908 12:49:27.264866  968017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 12:49:27.264984  968017 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem, removing ...
	I0908 12:49:27.264995  968017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem
	I0908 12:49:27.265021  968017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 12:49:27.265070  968017 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem, removing ...
	I0908 12:49:27.265077  968017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem
	I0908 12:49:27.265098  968017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 12:49:27.265147  968017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.embed-certs-095356 san=[127.0.0.1 192.168.76.2 embed-certs-095356 localhost minikube]
	I0908 12:49:27.478954  968017 provision.go:177] copyRemoteCerts
	I0908 12:49:27.479034  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:49:27.479072  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:27.497777  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:27.594551  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:49:27.622480  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0908 12:49:27.650190  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:49:27.677556  968017 provision.go:87] duration metric: took 432.588736ms to configureAuth
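
	configureAuth above regenerates the machine's server certificate with the SANs listed at provision.go:117 and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A quick way to inspect the result on the node (sketch; assumes OpenSSL 1.1.1+ for the -ext option):

	    out/minikube-linux-amd64 -p embed-certs-095356 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates -ext subjectAltName"
	    # the SAN list should cover 127.0.0.1, 192.168.76.2, embed-certs-095356, localhost and minikube
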
	I0908 12:49:27.677589  968017 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:49:27.677815  968017 config.go:182] Loaded profile config "embed-certs-095356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:49:27.677938  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:27.698245  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:27.698549  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:27.698567  968017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:49:28.026101  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:49:28.026153  968017 machine.go:96] duration metric: took 4.235551829s to provisionDockerMachine
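
	The final provisioning step writes a CRI-O sysconfig drop-in carrying the service-CIDR insecure-registry flag and restarts crio. To confirm it landed (sketch, using the same ssh invocation style as elsewhere in this report):

	    out/minikube-linux-amd64 -p embed-certs-095356 ssh "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"
	    # expected roughly:
	    #   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    #   active
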
	I0908 12:49:28.026167  968017 start.go:293] postStartSetup for "embed-certs-095356" (driver="docker")
	I0908 12:49:28.026181  968017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:49:28.026243  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:49:28.026301  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.047864  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.141987  968017 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:49:28.146300  968017 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:49:28.146346  968017 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:49:28.146356  968017 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:49:28.146366  968017 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:49:28.146382  968017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 12:49:28.146446  968017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 12:49:28.146562  968017 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem -> 6186202.pem in /etc/ssl/certs
	I0908 12:49:28.146690  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:49:28.157179  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:49:28.186965  968017 start.go:296] duration metric: took 160.778964ms for postStartSetup
	I0908 12:49:28.187059  968017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:49:28.187106  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.206758  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.293425  968017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:49:28.298894  968017 fix.go:56] duration metric: took 4.827979324s for fixHost
	I0908 12:49:28.298928  968017 start.go:83] releasing machines lock for "embed-certs-095356", held for 4.828041707s
	I0908 12:49:28.298991  968017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-095356
	I0908 12:49:28.319080  968017 ssh_runner.go:195] Run: cat /version.json
	I0908 12:49:28.319159  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.319190  968017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:49:28.319261  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.340864  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.342188  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.428053  968017 ssh_runner.go:195] Run: systemctl --version
	I0908 12:49:28.501265  968017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:49:28.645284  968017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:49:28.650558  968017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:49:28.659998  968017 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:49:28.660080  968017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:49:28.669203  968017 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 12:49:28.669230  968017 start.go:495] detecting cgroup driver to use...
	I0908 12:49:28.669266  968017 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:49:28.669311  968017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:49:28.681994  968017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:49:28.695114  968017 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:49:28.695194  968017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:49:28.708625  968017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:49:28.720641  968017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:49:28.798301  968017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:49:28.880045  968017 docker.go:234] disabling docker service ...
	I0908 12:49:28.880123  968017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:49:28.892469  968017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:49:28.903906  968017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:49:28.991744  968017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:49:29.072520  968017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:49:29.086635  968017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:49:29.104777  968017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:49:29.104847  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.115495  968017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:49:29.115587  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.126120  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.136593  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.148026  968017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:49:29.157553  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.168412  968017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.178655  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.189820  968017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:49:29.198879  968017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:49:29.208182  968017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:49:29.290790  968017 ssh_runner.go:195] Run: sudo systemctl restart crio
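
	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", and a default_sysctls entry for net.ipv4.ip_unprivileged_port_start=0), point crictl at the CRI-O socket via /etc/crictl.yaml, and then restart crio. A quick check of the resulting drop-in (sketch):

	    out/minikube-linux-amd64 -p embed-certs-095356 ssh "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	    # expected to show roughly:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #     "net.ipv4.ip_unprivileged_port_start=0",
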
	I0908 12:49:29.417281  968017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:49:29.417384  968017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:49:29.421280  968017 start.go:563] Will wait 60s for crictl version
	I0908 12:49:29.421346  968017 ssh_runner.go:195] Run: which crictl
	I0908 12:49:29.425224  968017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:49:29.463553  968017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 12:49:29.463638  968017 ssh_runner.go:195] Run: crio --version
	I0908 12:49:29.506438  968017 ssh_runner.go:195] Run: crio --version
	I0908 12:49:29.547947  968017 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 12:49:29.549125  968017 cli_runner.go:164] Run: docker network inspect embed-certs-095356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:49:29.567251  968017 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0908 12:49:29.571559  968017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
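
	The grep/printf/cp pattern above is used instead of sed -i because /etc/hosts inside a Docker-driver node is a bind-mounted file: it can be overwritten in place, but it cannot be replaced by a new inode, which is what sed -i's rename step would attempt. Generalized (sketch, with a hypothetical NAME/ADDR pair):

	    NAME=host.minikube.internal
	    ADDR=192.168.76.1
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts   # cp rewrites the existing file in place, which a bind mount allows
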
	I0908 12:49:29.583638  968017 kubeadm.go:875] updating cluster {Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:49:29.583786  968017 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:49:29.583863  968017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:49:29.628331  968017 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:49:29.628362  968017 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:49:29.628431  968017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:49:29.667577  968017 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:49:29.667607  968017 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:49:29.667618  968017 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0908 12:49:29.667774  968017 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-095356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:49:29.667845  968017 ssh_runner.go:195] Run: crio config
	I0908 12:49:29.714731  968017 cni.go:84] Creating CNI manager for ""
	I0908 12:49:29.714763  968017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:49:29.714778  968017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:49:29.714806  968017 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-095356 NodeName:embed-certs-095356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:49:29.714964  968017 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-095356"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:49:29.715064  968017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:49:29.724537  968017 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:49:29.724606  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:49:29.734183  968017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0908 12:49:29.752695  968017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:49:29.770346  968017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
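
	Once the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked offline before kubeadm ever consumes it. A sketch, assuming kubeadm sits alongside kubelet under /var/lib/minikube/binaries/v1.34.0 (as the "Found k8s binaries" line suggests) and that this release provides the `config validate` subcommand:

	    out/minikube-linux-amd64 -p embed-certs-095356 ssh "sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"
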
	I0908 12:49:29.788189  968017 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:49:29.792659  968017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:49:29.806295  968017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:49:29.885492  968017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:49:29.899924  968017 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356 for IP: 192.168.76.2
	I0908 12:49:29.899947  968017 certs.go:194] generating shared ca certs ...
	I0908 12:49:29.899965  968017 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:29.900170  968017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 12:49:29.900232  968017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 12:49:29.900244  968017 certs.go:256] generating profile certs ...
	I0908 12:49:29.900397  968017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/client.key
	I0908 12:49:29.900479  968017 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/apiserver.key.351e8f67
	I0908 12:49:29.900529  968017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/proxy-client.key
	I0908 12:49:29.900673  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem (1338 bytes)
	W0908 12:49:29.900723  968017 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620_empty.pem, impossibly tiny 0 bytes
	I0908 12:49:29.900738  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:49:29.900773  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:49:29.900804  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:49:29.900834  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 12:49:29.900885  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:49:29.901844  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:49:29.929236  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:49:29.955283  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:49:30.001144  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 12:49:30.083811  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0908 12:49:30.111611  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:49:30.137544  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:49:30.162987  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 12:49:30.190308  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /usr/share/ca-certificates/6186202.pem (1708 bytes)
	I0908 12:49:30.216267  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:49:30.241179  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem --> /usr/share/ca-certificates/618620.pem (1338 bytes)
	I0908 12:49:30.266532  968017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:49:30.286676  968017 ssh_runner.go:195] Run: openssl version
	I0908 12:49:30.292793  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6186202.pem && ln -fs /usr/share/ca-certificates/6186202.pem /etc/ssl/certs/6186202.pem"
	I0908 12:49:30.302890  968017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6186202.pem
	I0908 12:49:30.307054  968017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:43 /usr/share/ca-certificates/6186202.pem
	I0908 12:49:30.307137  968017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6186202.pem
	I0908 12:49:30.314839  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6186202.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:49:30.324591  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:49:30.334856  968017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:49:30.339200  968017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:49:30.339265  968017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:49:30.346720  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:49:30.356744  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/618620.pem && ln -fs /usr/share/ca-certificates/618620.pem /etc/ssl/certs/618620.pem"
	I0908 12:49:30.366464  968017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/618620.pem
	I0908 12:49:30.370295  968017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:43 /usr/share/ca-certificates/618620.pem
	I0908 12:49:30.370359  968017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/618620.pem
	I0908 12:49:30.377461  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/618620.pem /etc/ssl/certs/51391683.0"
	I0908 12:49:30.387829  968017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:49:30.392212  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:49:30.399604  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:49:30.406794  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:49:30.415234  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:49:30.424814  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:49:30.433404  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
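
	The -checkend 86400 flag makes openssl exit non-zero if the certificate will have expired 86400 seconds (24 hours) from now, which is how these lines verify that the existing control-plane certificates are still usable. Standalone (sketch):

	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid for at least 24h" \
	      || echo "expires within 24h (certs would need regenerating)"
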
	I0908 12:49:30.441261  968017 kubeadm.go:392] StartCluster: {Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:49:30.441390  968017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:49:30.441443  968017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:49:30.495102  968017 cri.go:89] found id: ""
	I0908 12:49:30.495193  968017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:49:30.507375  968017 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:49:30.507460  968017 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:49:30.507518  968017 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:49:30.525592  968017 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:49:30.526663  968017 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-095356" does not appear in /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:49:30.527123  968017 kubeconfig.go:62] /home/jenkins/minikube-integration/21512-614854/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-095356" cluster setting kubeconfig missing "embed-certs-095356" context setting]
	I0908 12:49:30.527890  968017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:30.529807  968017 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:49:30.589773  968017 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0908 12:49:30.589818  968017 kubeadm.go:593] duration metric: took 82.347027ms to restartPrimaryControlPlane
	I0908 12:49:30.589831  968017 kubeadm.go:394] duration metric: took 148.584231ms to StartCluster
	I0908 12:49:30.589855  968017 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:30.589960  968017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:49:30.592381  968017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:30.592824  968017 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:49:30.593255  968017 config.go:182] Loaded profile config "embed-certs-095356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:49:30.593363  968017 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:49:30.593868  968017 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-095356"
	I0908 12:49:30.593970  968017 addons.go:69] Setting metrics-server=true in profile "embed-certs-095356"
	I0908 12:49:30.594004  968017 addons.go:238] Setting addon metrics-server=true in "embed-certs-095356"
	I0908 12:49:30.594027  968017 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-095356"
	W0908 12:49:30.594087  968017 addons.go:247] addon storage-provisioner should already be in state true
	W0908 12:49:30.594034  968017 addons.go:247] addon metrics-server should already be in state true
	I0908 12:49:30.594192  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.594784  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.593949  968017 addons.go:69] Setting dashboard=true in profile "embed-certs-095356"
	I0908 12:49:30.594998  968017 addons.go:238] Setting addon dashboard=true in "embed-certs-095356"
	W0908 12:49:30.595035  968017 addons.go:247] addon dashboard should already be in state true
	I0908 12:49:30.593936  968017 addons.go:69] Setting default-storageclass=true in profile "embed-certs-095356"
	I0908 12:49:30.595128  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.595190  968017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-095356"
	I0908 12:49:30.595779  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.595794  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.596312  968017 out.go:179] * Verifying Kubernetes components...
	I0908 12:49:30.596702  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.597381  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.598769  968017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:49:30.627242  968017 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 12:49:30.627336  968017 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:49:30.629053  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 12:49:30.629102  968017 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 12:49:30.629240  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.629609  968017 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:49:30.629646  968017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:49:30.629710  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.630069  968017 addons.go:238] Setting addon default-storageclass=true in "embed-certs-095356"
	W0908 12:49:30.630103  968017 addons.go:247] addon default-storageclass should already be in state true
	I0908 12:49:30.630136  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.630646  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.636302  968017 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 12:49:30.637830  968017 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 12:49:30.639045  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 12:49:30.639070  968017 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 12:49:30.639141  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.654868  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.656480  968017 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:49:30.656502  968017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:49:30.656564  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.657798  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.664788  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.677548  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.977337  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:49:30.993616  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 12:49:30.993641  968017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 12:49:31.081647  968017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:49:31.089969  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 12:49:31.090006  968017 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 12:49:31.178840  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:49:31.184530  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 12:49:31.184567  968017 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 12:49:31.195426  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 12:49:31.195464  968017 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 12:49:31.294073  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 12:49:31.294120  968017 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 12:49:31.299350  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:49:31.299386  968017 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 12:49:31.393389  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 12:49:31.393423  968017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 12:49:31.397630  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:49:31.491283  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 12:49:31.491319  968017 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 12:49:31.514037  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 12:49:31.514072  968017 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 12:49:31.598921  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 12:49:31.599030  968017 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 12:49:31.676276  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 12:49:31.676308  968017 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 12:49:31.702967  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 12:49:31.702996  968017 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 12:49:31.722900  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 12:49:34.500652  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.523264613s)
	I0908 12:49:34.500807  968017 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.419110464s)
	I0908 12:49:34.500856  968017 node_ready.go:35] waiting up to 6m0s for node "embed-certs-095356" to be "Ready" ...
	I0908 12:49:34.594680  968017 node_ready.go:49] node "embed-certs-095356" is "Ready"
	I0908 12:49:34.594723  968017 node_ready.go:38] duration metric: took 93.848547ms for node "embed-certs-095356" to be "Ready" ...
	I0908 12:49:34.594743  968017 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:49:34.594802  968017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:49:36.709062  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.530174828s)
	I0908 12:49:36.709183  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.311509171s)
	I0908 12:49:36.709220  968017 addons.go:479] Verifying addon metrics-server=true in "embed-certs-095356"
	I0908 12:49:36.709338  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.98638923s)
	I0908 12:49:36.709389  968017 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.114563032s)
	I0908 12:49:36.709423  968017 api_server.go:72] duration metric: took 6.116557346s to wait for apiserver process to appear ...
	I0908 12:49:36.709467  968017 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:49:36.709490  968017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:49:36.711538  968017 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-095356 addons enable metrics-server
	
	I0908 12:49:36.713424  968017 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0908 12:49:36.714376  968017 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:49:36.714401  968017 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:49:36.714765  968017 addons.go:514] duration metric: took 6.121413185s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0908 12:49:37.209650  968017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:49:37.215303  968017 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:49:37.215336  968017 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:49:37.709615  968017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:49:37.715220  968017 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0908 12:49:37.716454  968017 api_server.go:141] control plane version: v1.34.0
	I0908 12:49:37.716483  968017 api_server.go:131] duration metric: took 1.007008535s to wait for apiserver health ...
	I0908 12:49:37.716492  968017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:49:37.720243  968017 system_pods.go:59] 9 kube-system pods found
	I0908 12:49:37.720291  968017 system_pods.go:61] "coredns-66bc5c9577-vmqhr" [979cc534-4ad1-434a-9838-7712ba9213b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:49:37.720308  968017 system_pods.go:61] "etcd-embed-certs-095356" [302f42a3-4f92-4028-9ef1-60a84815daef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:49:37.720315  968017 system_pods.go:61] "kindnet-8grdw" [8f60ebac-5974-4e25-a33d-1c82b8618cbc] Running
	I0908 12:49:37.720321  968017 system_pods.go:61] "kube-apiserver-embed-certs-095356" [0b2fc077-f650-4ade-8239-810edf4b9d9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:49:37.720327  968017 system_pods.go:61] "kube-controller-manager-embed-certs-095356" [b08ed252-184b-4d37-a752-e16594aa4a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:49:37.720334  968017 system_pods.go:61] "kube-proxy-rk7d4" [7e92a5d1-aac6-48cb-9460-45706cda644d] Running
	I0908 12:49:37.720340  968017 system_pods.go:61] "kube-scheduler-embed-certs-095356" [0df658e9-5081-432d-a4a7-df0a452660ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:49:37.720348  968017 system_pods.go:61] "metrics-server-746fcd58dc-kw49k" [ccedbe2e-6b00-427c-aee5-d32661187320] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:49:37.720362  968017 system_pods.go:61] "storage-provisioner" [ef3ef697-29b5-4d95-9fce-0d4e14bf1575] Running
	I0908 12:49:37.720370  968017 system_pods.go:74] duration metric: took 3.871512ms to wait for pod list to return data ...
	I0908 12:49:37.720381  968017 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:49:37.723566  968017 default_sa.go:45] found service account: "default"
	I0908 12:49:37.723599  968017 default_sa.go:55] duration metric: took 3.211119ms for default service account to be created ...
	I0908 12:49:37.723612  968017 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:49:37.726952  968017 system_pods.go:86] 9 kube-system pods found
	I0908 12:49:37.726991  968017 system_pods.go:89] "coredns-66bc5c9577-vmqhr" [979cc534-4ad1-434a-9838-7712ba9213b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:49:37.727005  968017 system_pods.go:89] "etcd-embed-certs-095356" [302f42a3-4f92-4028-9ef1-60a84815daef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:49:37.727018  968017 system_pods.go:89] "kindnet-8grdw" [8f60ebac-5974-4e25-a33d-1c82b8618cbc] Running
	I0908 12:49:37.727028  968017 system_pods.go:89] "kube-apiserver-embed-certs-095356" [0b2fc077-f650-4ade-8239-810edf4b9d9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:49:37.727037  968017 system_pods.go:89] "kube-controller-manager-embed-certs-095356" [b08ed252-184b-4d37-a752-e16594aa4a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:49:37.727047  968017 system_pods.go:89] "kube-proxy-rk7d4" [7e92a5d1-aac6-48cb-9460-45706cda644d] Running
	I0908 12:49:37.727056  968017 system_pods.go:89] "kube-scheduler-embed-certs-095356" [0df658e9-5081-432d-a4a7-df0a452660ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:49:37.727068  968017 system_pods.go:89] "metrics-server-746fcd58dc-kw49k" [ccedbe2e-6b00-427c-aee5-d32661187320] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:49:37.727077  968017 system_pods.go:89] "storage-provisioner" [ef3ef697-29b5-4d95-9fce-0d4e14bf1575] Running
	I0908 12:49:37.727090  968017 system_pods.go:126] duration metric: took 3.469285ms to wait for k8s-apps to be running ...
	I0908 12:49:37.727103  968017 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:49:37.727180  968017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:49:37.739211  968017 system_svc.go:56] duration metric: took 12.098934ms WaitForService to wait for kubelet
	I0908 12:49:37.739249  968017 kubeadm.go:578] duration metric: took 7.146380991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:49:37.739275  968017 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:49:37.742737  968017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 12:49:37.742768  968017 node_conditions.go:123] node cpu capacity is 8
	I0908 12:49:37.742783  968017 node_conditions.go:105] duration metric: took 3.502532ms to run NodePressure ...
	I0908 12:49:37.742797  968017 start.go:241] waiting for startup goroutines ...
	I0908 12:49:37.742806  968017 start.go:246] waiting for cluster config update ...
	I0908 12:49:37.742820  968017 start.go:255] writing updated cluster config ...
	I0908 12:49:37.743123  968017 ssh_runner.go:195] Run: rm -f paused
	I0908 12:49:37.747094  968017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:49:37.751159  968017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vmqhr" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 12:49:39.757202  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:41.781793  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:44.257272  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:46.757216  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:49.257899  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:51.757748  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:54.257571  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:56.257624  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:58.757397  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:01.256722  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:03.257559  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:05.757014  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:08.257679  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:10.757606  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:13.257013  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	I0908 12:50:13.758380  968017 pod_ready.go:94] pod "coredns-66bc5c9577-vmqhr" is "Ready"
	I0908 12:50:13.758411  968017 pod_ready.go:86] duration metric: took 36.007221342s for pod "coredns-66bc5c9577-vmqhr" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.761685  968017 pod_ready.go:83] waiting for pod "etcd-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.767420  968017 pod_ready.go:94] pod "etcd-embed-certs-095356" is "Ready"
	I0908 12:50:13.767459  968017 pod_ready.go:86] duration metric: took 5.743199ms for pod "etcd-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.862817  968017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.869826  968017 pod_ready.go:94] pod "kube-apiserver-embed-certs-095356" is "Ready"
	I0908 12:50:13.869857  968017 pod_ready.go:86] duration metric: took 7.008074ms for pod "kube-apiserver-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.872326  968017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.954638  968017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-095356" is "Ready"
	I0908 12:50:13.954671  968017 pod_ready.go:86] duration metric: took 82.317504ms for pod "kube-controller-manager-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:14.154975  968017 pod_ready.go:83] waiting for pod "kube-proxy-rk7d4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:14.555035  968017 pod_ready.go:94] pod "kube-proxy-rk7d4" is "Ready"
	I0908 12:50:14.555083  968017 pod_ready.go:86] duration metric: took 400.07973ms for pod "kube-proxy-rk7d4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:14.755134  968017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:15.155832  968017 pod_ready.go:94] pod "kube-scheduler-embed-certs-095356" is "Ready"
	I0908 12:50:15.155864  968017 pod_ready.go:86] duration metric: took 400.702953ms for pod "kube-scheduler-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:15.155885  968017 pod_ready.go:40] duration metric: took 37.408745743s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:50:15.202619  968017 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:50:15.204368  968017 out.go:179] * Done! kubectl is now configured to use "embed-certs-095356" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 08 12:53:22 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:22.180957856Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=5597cef9-e783-4b60-838b-d4d206584d4a name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:28 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:28.179949001Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=3ab43bdc-b0ed-40a4-8ddb-b5490ea10ccb name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:28 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:28.180218874Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=3ab43bdc-b0ed-40a4-8ddb-b5490ea10ccb name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:35 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:35.180153434Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c0828268-1b14-4f81-baa4-74365ba80595 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:35 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:35.180534694Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c0828268-1b14-4f81-baa4-74365ba80595 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:43 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:43.180621271Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f8a17d19-db38-4ac7-935d-275446957388 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:43 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:43.180902026Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f8a17d19-db38-4ac7-935d-275446957388 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:48 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:48.180376954Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=efb0bf2f-d78b-438f-a140-c57e4e9f59a2 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:48 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:48.180660886Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=efb0bf2f-d78b-438f-a140-c57e4e9f59a2 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:58 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:58.180016604Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=adf0b915-e7cd-4c1d-b9e2-aa7e5f55a09e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:58 old-k8s-version-896003 crio[682]: time="2025-09-08 12:53:58.180340412Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=adf0b915-e7cd-4c1d-b9e2-aa7e5f55a09e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:00 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:00.179990724Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=17acf6b8-35f0-4b20-a36a-12fe62c94e11 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:00 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:00.180361127Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=17acf6b8-35f0-4b20-a36a-12fe62c94e11 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:10 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:10.180236716Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1a7e9212-6ede-4df5-8e5d-bde316cfd0e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:10 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:10.180539654Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1a7e9212-6ede-4df5-8e5d-bde316cfd0e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:13 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:13.180237022Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6a26acf2-2302-4103-a144-e032715fcb9e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:13 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:13.180556031Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6a26acf2-2302-4103-a144-e032715fcb9e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:23 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:23.180469554Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=7d0fea19-7657-4bfb-bca6-5877582075cc name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:23 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:23.180741779Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=7d0fea19-7657-4bfb-bca6-5877582075cc name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:28 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:28.180215180Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=28f1c783-da12-4424-a622-3af3c85a3b93 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:28 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:28.180564211Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=28f1c783-da12-4424-a622-3af3c85a3b93 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:35 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:35.179561619Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=3fee18b6-e51c-473f-80cb-b3ae11d80ea0 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:35 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:35.179863954Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=3fee18b6-e51c-473f-80cb-b3ae11d80ea0 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:40 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:40.180279982Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=73582bda-e567-45d5-affc-2c29aca095a8 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:40 old-k8s-version-896003 crio[682]: time="2025-09-08 12:54:40.180614089Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=73582bda-e567-45d5-affc-2c29aca095a8 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	b10c81397f98d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   About a minute ago   Exited              dashboard-metrics-scraper   8                   ffa1487a3cd2f       dashboard-metrics-scraper-5f989dc9cf-f4rk8
	9ade55b3edc9b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago       Running             storage-provisioner         2                   cda6e33963d05       storage-provisioner
	7c8de1f45f71b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 minutes ago       Running             coredns                     1                   2bf6958df81d0       coredns-5dd5756b68-99vrp
	0704edfb12400       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago       Running             busybox                     1                   0af1ba0130ec7       busybox
	0c68b4da41592       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a   18 minutes ago       Running             kube-proxy                  1                   be167abb1bf28       kube-proxy-sptvq
	1c160ab3dfbcb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago       Running             kindnet-cni                 1                   5c9555d0f23a6       kindnet-bx9xt
	1bd62cd3b9358       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago       Exited              storage-provisioner         1                   cda6e33963d05       storage-provisioner
	765b8ead0b1e5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   18 minutes ago       Running             etcd                        1                   f6dce38c48dec       etcd-old-k8s-version-896003
	88a0d1934bbfb       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157   18 minutes ago       Running             kube-scheduler              1                   03f7d3ad2afe1       kube-scheduler-old-k8s-version-896003
	686c36edfbd4c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95   18 minutes ago       Running             kube-apiserver              1                   9aa1635509644       kube-apiserver-old-k8s-version-896003
	9bad1ae1ad18d       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62   18 minutes ago       Running             kube-controller-manager     1                   5ef8f1cec4fb6       kube-controller-manager-old-k8s-version-896003
	
	
	==> coredns [7c8de1f45f71bd48650af20abb5c2aa28a751d0ecaf303e41e6931cd0e115b0b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55804 - 62028 "HINFO IN 8521887911870039379.8223469923305335544. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040339135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-896003
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-896003
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=old-k8s-version-896003
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_34_58_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:34:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-896003
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:54:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:52:23 +0000   Mon, 08 Sep 2025 12:34:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:52:23 +0000   Mon, 08 Sep 2025 12:34:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:52:23 +0000   Mon, 08 Sep 2025 12:34:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:52:23 +0000   Mon, 08 Sep 2025 12:35:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-896003
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 538c8e0f78b14b9e92ebf3d6eac1995d
	  System UUID:                4c49eb36-4aee-4dca-981a-0dd58e95d17d
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-99vrp                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-896003                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-bx9xt                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-896003             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-896003    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-sptvq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-896003             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-z5rkf                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f4rk8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7dbrb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-896003 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-896003 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-896003 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-896003 event: Registered Node old-k8s-version-896003 in Controller
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-896003 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-896003 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-896003 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node old-k8s-version-896003 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-896003 event: Registered Node old-k8s-version-896003 in Controller
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +1.006042] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000007] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +2.015807] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000007] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000003] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +4.251670] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000007] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +8.195202] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000006] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	
	
	==> etcd [765b8ead0b1e57d46872ecd1fe55b1d74c03da4a25a08595dd8d85d10231825b] <==
	{"level":"info","ts":"2025-09-08T12:35:59.595064Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-08T12:35:59.595208Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-08T12:35:59.595447Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-08T12:35:59.595496Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-08T12:35:59.59556Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-08T12:36:00.695339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-08T12:36:00.695416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-08T12:36:00.695451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-08T12:36:00.695464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-09-08T12:36:00.69547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-08T12:36:00.695479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-09-08T12:36:00.695487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-08T12:36:00.698152Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T12:36:00.698173Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-08T12:36:00.698164Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-896003 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-08T12:36:00.69849Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-08T12:36:00.698589Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-08T12:36:00.699604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-08T12:36:00.699727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-08T12:46:00.718143Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":949}
	{"level":"info","ts":"2025-09-08T12:46:00.719939Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":949,"took":"1.50877ms","hash":1404126764}
	{"level":"info","ts":"2025-09-08T12:46:00.71998Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1404126764,"revision":949,"compact-revision":-1}
	{"level":"info","ts":"2025-09-08T12:51:00.722919Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1200}
	{"level":"info","ts":"2025-09-08T12:51:00.724136Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1200,"took":"886.177µs","hash":971145821}
	{"level":"info","ts":"2025-09-08T12:51:00.724176Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":971145821,"revision":1200,"compact-revision":949}
	
	
	==> kernel <==
	 12:54:42 up  3:37,  0 users,  load average: 1.13, 1.13, 1.46
	Linux old-k8s-version-896003 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1c160ab3dfbcb6f27d27c42821d725d0836924c6d21d22cdb9cccd0a8f308e99] <==
	I0908 12:52:34.987813       1 main.go:301] handling current node
	I0908 12:52:44.985386       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:52:44.985431       1 main.go:301] handling current node
	I0908 12:52:54.987817       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:52:54.987863       1 main.go:301] handling current node
	I0908 12:53:04.988960       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:53:04.988996       1 main.go:301] handling current node
	I0908 12:53:14.985227       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:53:14.985274       1 main.go:301] handling current node
	I0908 12:53:24.991760       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:53:24.991799       1 main.go:301] handling current node
	I0908 12:53:34.987767       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:53:34.987832       1 main.go:301] handling current node
	I0908 12:53:44.985755       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:53:44.985795       1 main.go:301] handling current node
	I0908 12:53:54.991808       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:53:54.991850       1 main.go:301] handling current node
	I0908 12:54:04.989035       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:54:04.989076       1 main.go:301] handling current node
	I0908 12:54:14.985268       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:54:14.985314       1 main.go:301] handling current node
	I0908 12:54:24.991787       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:54:24.991831       1 main.go:301] handling current node
	I0908 12:54:34.990087       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0908 12:54:34.990146       1 main.go:301] handling current node
	
	
	==> kube-apiserver [686c36edfbd4c19d3aedb2cf3c30545af99cf261f70a6e93a943cf7b7b113a52] <==
	I0908 12:51:03.498788       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:51:03.498850       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 12:51:03.498933       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 12:51:03.500108       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:52:02.292604       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.103.220.17:443: connect: connection refused
	I0908 12:52:02.292636       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 12:52:03.499893       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 12:52:03.499939       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0908 12:52:03.499946       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:52:03.501033       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 12:52:03.501116       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 12:52:03.501124       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:53:02.293340       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.103.220.17:443: connect: connection refused
	I0908 12:53:02.293369       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0908 12:54:02.292431       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.103.220.17:443: connect: connection refused
	I0908 12:54:02.292463       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0908 12:54:03.500146       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 12:54:03.500212       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0908 12:54:03.500224       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:54:03.502359       1 handler_proxy.go:93] no RequestInfo found in the context
	E0908 12:54:03.502447       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0908 12:54:03.502458       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9bad1ae1ad18d77f1332b137f3066d0ac9c00dc3716df3e03d8ba5389cf02778] <==
	I0908 12:49:46.609920       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:50:16.063264       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:50:16.617897       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:50:46.067717       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:50:46.625412       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 12:50:53.190708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="129.453µs"
	I0908 12:51:04.190142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="131.548µs"
	E0908 12:51:16.073389       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:51:16.633204       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:51:46.079155       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:51:46.640598       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:52:16.084649       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:52:16.648482       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:52:46.089773       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:52:46.656749       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0908 12:52:48.465674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="115.172µs"
	I0908 12:52:56.257059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="143.97µs"
	I0908 12:52:59.190381       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="121.647µs"
	I0908 12:53:14.192191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="121.088µs"
	E0908 12:53:16.095754       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:53:16.665707       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:53:46.100845       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:53:46.673730       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0908 12:54:16.105865       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0908 12:54:16.682385       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0c68b4da41592ce102d0e714b7c537d8cad9cfb5f1437c23eafc3286d0018350] <==
	I0908 12:36:04.894223       1 server_others.go:69] "Using iptables proxy"
	I0908 12:36:04.904040       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0908 12:36:04.991644       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:36:04.994927       1 server_others.go:152] "Using iptables Proxier"
	I0908 12:36:04.994980       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0908 12:36:04.994991       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0908 12:36:04.995030       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0908 12:36:04.995285       1 server.go:846] "Version info" version="v1.28.0"
	I0908 12:36:04.995310       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:04.996234       1 config.go:188] "Starting service config controller"
	I0908 12:36:04.996313       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0908 12:36:04.996351       1 config.go:315] "Starting node config controller"
	I0908 12:36:04.996356       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0908 12:36:04.996764       1 config.go:97] "Starting endpoint slice config controller"
	I0908 12:36:04.997489       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0908 12:36:05.096889       1 shared_informer.go:318] Caches are synced for node config
	I0908 12:36:05.096916       1 shared_informer.go:318] Caches are synced for service config
	I0908 12:36:05.097583       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [88a0d1934bbfb7a8b2abe2c0924c6955fe6827025981401e413cc0fbb6ad8ac8] <==
	I0908 12:36:00.678196       1 serving.go:348] Generated self-signed cert in-memory
	W0908 12:36:02.402767       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 12:36:02.402903       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 12:36:02.402924       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 12:36:02.402933       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 12:36:02.497485       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0908 12:36:02.497608       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:02.499434       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:02.499532       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0908 12:36:02.501293       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0908 12:36:02.501381       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0908 12:36:02.599869       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 08 12:53:22 old-k8s-version-896003 kubelet[829]: E0908 12:53:22.181232     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:53:28 old-k8s-version-896003 kubelet[829]: E0908 12:53:28.180523     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:53:31 old-k8s-version-896003 kubelet[829]: I0908 12:53:31.179031     829 scope.go:117] "RemoveContainer" containerID="b10c81397f98d1fa52082d1e1a342b14f6b9acb6f1de8daac6441f041fb56027"
	Sep 08 12:53:31 old-k8s-version-896003 kubelet[829]: E0908 12:53:31.179472     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:53:35 old-k8s-version-896003 kubelet[829]: E0908 12:53:35.180908     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:53:43 old-k8s-version-896003 kubelet[829]: E0908 12:53:43.181280     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:53:44 old-k8s-version-896003 kubelet[829]: I0908 12:53:44.179913     829 scope.go:117] "RemoveContainer" containerID="b10c81397f98d1fa52082d1e1a342b14f6b9acb6f1de8daac6441f041fb56027"
	Sep 08 12:53:44 old-k8s-version-896003 kubelet[829]: E0908 12:53:44.180320     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:53:48 old-k8s-version-896003 kubelet[829]: E0908 12:53:48.180968     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:53:56 old-k8s-version-896003 kubelet[829]: I0908 12:53:56.179337     829 scope.go:117] "RemoveContainer" containerID="b10c81397f98d1fa52082d1e1a342b14f6b9acb6f1de8daac6441f041fb56027"
	Sep 08 12:53:56 old-k8s-version-896003 kubelet[829]: E0908 12:53:56.179757     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:53:58 old-k8s-version-896003 kubelet[829]: E0908 12:53:58.180590     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:54:00 old-k8s-version-896003 kubelet[829]: E0908 12:54:00.180607     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:54:10 old-k8s-version-896003 kubelet[829]: E0908 12:54:10.180816     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:54:11 old-k8s-version-896003 kubelet[829]: I0908 12:54:11.179882     829 scope.go:117] "RemoveContainer" containerID="b10c81397f98d1fa52082d1e1a342b14f6b9acb6f1de8daac6441f041fb56027"
	Sep 08 12:54:11 old-k8s-version-896003 kubelet[829]: E0908 12:54:11.180268     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:54:13 old-k8s-version-896003 kubelet[829]: E0908 12:54:13.180917     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:54:23 old-k8s-version-896003 kubelet[829]: E0908 12:54:23.181113     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:54:26 old-k8s-version-896003 kubelet[829]: I0908 12:54:26.179069     829 scope.go:117] "RemoveContainer" containerID="b10c81397f98d1fa52082d1e1a342b14f6b9acb6f1de8daac6441f041fb56027"
	Sep 08 12:54:26 old-k8s-version-896003 kubelet[829]: E0908 12:54:26.179459     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:54:28 old-k8s-version-896003 kubelet[829]: E0908 12:54:28.180849     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	Sep 08 12:54:35 old-k8s-version-896003 kubelet[829]: E0908 12:54:35.180227     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-z5rkf" podUID="e61b86c4-79f9-424d-99b1-7f532948e25a"
	Sep 08 12:54:38 old-k8s-version-896003 kubelet[829]: I0908 12:54:38.179338     829 scope.go:117] "RemoveContainer" containerID="b10c81397f98d1fa52082d1e1a342b14f6b9acb6f1de8daac6441f041fb56027"
	Sep 08 12:54:38 old-k8s-version-896003 kubelet[829]: E0908 12:54:38.179715     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f4rk8_kubernetes-dashboard(835b1d48-6be4-48ad-b1c3-c24a581d31d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f4rk8" podUID="835b1d48-6be4-48ad-b1c3-c24a581d31d5"
	Sep 08 12:54:40 old-k8s-version-896003 kubelet[829]: E0908 12:54:40.180905     829 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7dbrb" podUID="4704cc58-09c7-49d5-a649-d7e9fd6c1297"
	
	
	==> storage-provisioner [1bd62cd3b9358589d46a2c7f83c0c54f1db5ed8e60b9e29d91bac001cbe526f0] <==
	I0908 12:36:04.393115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 12:36:34.398346       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9ade55b3edc9b61d1c8fbfa6355ede882ed6cd8a05cffb6843f0fd0bf3141da1] <==
	I0908 12:36:35.441224       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 12:36:35.451066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 12:36:35.451115       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0908 12:36:52.856950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 12:36:52.857028       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c064b846-3cb1-463b-aa29-6c180848f227", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-896003_be92447a-a74a-4547-9621-02ef7c155c81 became leader
	I0908 12:36:52.857156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-896003_be92447a-a74a-4547-9621-02ef7c155c81!
	I0908 12:36:52.957766       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-896003_be92447a-a74a-4547-9621-02ef7c155c81!
	

                                                
                                                
-- /stdout --
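The kube-apiserver and kube-controller-manager logs in the dump above repeatedly report v1beta1.metrics.k8s.io as unavailable (HTTP 503, "stale GroupVersion discovery"); that is the expected side effect of the metrics-server addon having its registry rewritten to fake.domain (visible in the kubelet ImagePullBackOff lines above), so the backing pod never starts. As a diagnostic sketch only, not part of the recorded run and assuming the profile were still running, the aggregated APIService and its backing service could be checked like this:

# Hypothetical follow-up checks against the old-k8s-version profile
kubectl --context old-k8s-version-896003 get apiservice v1beta1.metrics.k8s.io -o wide
kubectl --context old-k8s-version-896003 -n kube-system get endpoints metrics-server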
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-896003 -n old-k8s-version-896003
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-896003 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-z5rkf kubernetes-dashboard-8694d4445c-7dbrb
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-896003 describe pod metrics-server-57f55c9bc5-z5rkf kubernetes-dashboard-8694d4445c-7dbrb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-896003 describe pod metrics-server-57f55c9bc5-z5rkf kubernetes-dashboard-8694d4445c-7dbrb: exit status 1 (65.299555ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-z5rkf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-7dbrb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-896003 describe pod metrics-server-57f55c9bc5-z5rkf kubernetes-dashboard-8694d4445c-7dbrb: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.66s)
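The kubelet log above shows the dashboard pod stuck in ImagePullBackOff on docker.io/kubernetesui/dashboard:v2.7.0 (the metrics-server pull failure is expected, since that addon is deliberately pointed at fake.domain), and the no-preload entry below attributes the same dashboard pull failure to Docker Hub's unauthenticated rate limit. As a mitigation sketch only, assuming the CI host itself can still pull the image (for example with authenticated credentials or a registry mirror), the image could be pre-loaded into the profile so kubelet never pulls it from docker.io; the commands below are illustrative and not part of the recorded run:

# Hypothetical pre-load of the dashboard image before re-running the test
docker pull docker.io/kubernetesui/dashboard:v2.7.0
minikube -p old-k8s-version-896003 image load docker.io/kubernetesui/dashboard:v2.7.0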

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vwd7n" [537a29c5-ffc1-49e3-8a70-737656b3a999] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 12:46:09.413787  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:46:24.585701  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997730 -n no-preload-997730
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-08 12:54:57.272677724 +0000 UTC m=+4916.565420202
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-997730 describe po kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context no-preload-997730 describe po kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-vwd7n
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-997730/192.168.94.2
Start Time:       Mon, 08 Sep 2025 12:36:19 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ln5gl (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-ln5gl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n to no-preload-997730
  Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     13m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     13m (x5 over 18m)     kubelet            Error: ErrImagePull
  Normal   BackOff    3m26s (x48 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     3m (x50 over 18m)     kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-997730 logs kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-997730 logs kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard: exit status 1 (80.387829ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-vwd7n" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context no-preload-997730 logs kubernetes-dashboard-855c9754f9-vwd7n -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-997730 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
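The events above pin the dashboard failure on Docker Hub's toomanyrequests response for the pinned v2.7.0 digest. As a reproduction sketch only (the commands are hypothetical and not part of the recorded run), the same pull can be retried from inside the node to confirm the limit is being hit from the cluster's egress address:

# Hypothetical manual pull from inside the no-preload node
minikube -p no-preload-997730 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
# Expected, as an assumption: the pull fails with the same toomanyrequests error seen in the pod events.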
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-997730
helpers_test.go:243: (dbg) docker inspect no-preload-997730:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb",
	        "Created": "2025-09-08T12:34:43.172154041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 938893,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:36:02.231771503Z",
	            "FinishedAt": "2025-09-08T12:36:01.350027288Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb/hosts",
	        "LogPath": "/var/lib/docker/containers/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb/b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb-json.log",
	        "Name": "/no-preload-997730",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-997730:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-997730",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7f8cab201e63b95b844710cef08df1a17b03f2b8714f96097cea6422e7151fb",
	                "LowerDir": "/var/lib/docker/overlay2/e34fbca8234696e511e5798b67fe0066e07e799a01b931170e8f33aea364697f-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e34fbca8234696e511e5798b67fe0066e07e799a01b931170e8f33aea364697f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e34fbca8234696e511e5798b67fe0066e07e799a01b931170e8f33aea364697f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e34fbca8234696e511e5798b67fe0066e07e799a01b931170e8f33aea364697f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-997730",
	                "Source": "/var/lib/docker/volumes/no-preload-997730/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-997730",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-997730",
	                "name.minikube.sigs.k8s.io": "no-preload-997730",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8d9936158e25c459552b9ad67c50d23cf1343416009b9731b034684b1af9d78",
	            "SandboxKey": "/var/run/docker/netns/a8d9936158e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-997730": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:11:f7:fb:5b:36",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34c70443944eae5e5bf18c5289547ab218c10406c1c1860d95139a16069c0d1e",
	                    "EndpointID": "7495518a83442d35741d2e0362b38a949419c13b26b958566d9ac3fe24c8edf8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-997730",
	                        "b7f8cab201e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997730 -n no-preload-997730
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-997730 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-997730 logs -n 25: (1.272357972s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p calico-283124 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ ssh     │ -p calico-283124 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ ssh     │ -p calico-283124 sudo crio config                                                                                                                                                                                                             │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ delete  │ -p calico-283124                                                                                                                                                                                                                              │ calico-283124                │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:46 UTC │
	│ start   │ -p newest-cni-139998 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:46 UTC │ 08 Sep 25 12:47 UTC │
	│ addons  │ enable metrics-server -p newest-cni-139998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ stop    │ -p newest-cni-139998 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-139998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ start   │ -p newest-cni-139998 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ image   │ newest-cni-139998 image list --format=json                                                                                                                                                                                                    │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ pause   │ -p newest-cni-139998 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ unpause │ -p newest-cni-139998 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ delete  │ -p newest-cni-139998                                                                                                                                                                                                                          │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ delete  │ -p newest-cni-139998                                                                                                                                                                                                                          │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ delete  │ -p disable-driver-mounts-173021                                                                                                                                                                                                               │ disable-driver-mounts-173021 │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ start   │ -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:49 UTC │
	│ addons  │ enable metrics-server -p embed-certs-095356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:49 UTC │
	│ stop    │ -p embed-certs-095356 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:49 UTC │
	│ addons  │ enable dashboard -p embed-certs-095356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:49 UTC │
	│ start   │ -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:50 UTC │
	│ image   │ old-k8s-version-896003 image list --format=json                                                                                                                                                                                               │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ pause   │ -p old-k8s-version-896003 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ unpause │ -p old-k8s-version-896003 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ delete  │ -p old-k8s-version-896003                                                                                                                                                                                                                     │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ delete  │ -p old-k8s-version-896003                                                                                                                                                                                                                     │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:49:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:49:23.279400  968017 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:49:23.279807  968017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:49:23.279824  968017 out.go:374] Setting ErrFile to fd 2...
	I0908 12:49:23.279829  968017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:49:23.280064  968017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:49:23.280789  968017 out.go:368] Setting JSON to false
	I0908 12:49:23.282282  968017 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12707,"bootTime":1757323056,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 12:49:23.282415  968017 start.go:140] virtualization: kvm guest
	I0908 12:49:23.284711  968017 out.go:179] * [embed-certs-095356] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 12:49:23.286739  968017 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:49:23.286750  968017 notify.go:220] Checking for updates...
	I0908 12:49:23.289669  968017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:49:23.291064  968017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:49:23.292333  968017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 12:49:23.293647  968017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 12:49:23.295067  968017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:49:23.296896  968017 config.go:182] Loaded profile config "embed-certs-095356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:49:23.297523  968017 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:49:23.323231  968017 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:49:23.323393  968017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:49:23.377796  968017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:49:23.367734602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:49:23.377913  968017 docker.go:318] overlay module found
	I0908 12:49:23.379836  968017 out.go:179] * Using the docker driver based on existing profile
	I0908 12:49:23.381063  968017 start.go:304] selected driver: docker
	I0908 12:49:23.381087  968017 start.go:918] validating driver "docker" against &{Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:49:23.381212  968017 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:49:23.382437  968017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:49:23.441035  968017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:49:23.430451531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:49:23.441421  968017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:49:23.441475  968017 cni.go:84] Creating CNI manager for ""
	I0908 12:49:23.441548  968017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:49:23.441616  968017 start.go:348] cluster config:
	{Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:49:23.444524  968017 out.go:179] * Starting "embed-certs-095356" primary control-plane node in "embed-certs-095356" cluster
	I0908 12:49:23.446148  968017 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:49:23.447633  968017 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:49:23.448890  968017 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:49:23.448967  968017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 12:49:23.448984  968017 cache.go:58] Caching tarball of preloaded images
	I0908 12:49:23.449045  968017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:49:23.449154  968017 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 12:49:23.449170  968017 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:49:23.449314  968017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/config.json ...
	I0908 12:49:23.470704  968017 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:49:23.470727  968017 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:49:23.470746  968017 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:49:23.470778  968017 start.go:360] acquireMachinesLock for embed-certs-095356: {Name:mk9355040c36d7eff54da75f6473007cb8502c78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:49:23.470872  968017 start.go:364] duration metric: took 46.58µs to acquireMachinesLock for "embed-certs-095356"
	I0908 12:49:23.470895  968017 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:49:23.470902  968017 fix.go:54] fixHost starting: 
	I0908 12:49:23.471117  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:23.490230  968017 fix.go:112] recreateIfNeeded on embed-certs-095356: state=Stopped err=<nil>
	W0908 12:49:23.490302  968017 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:49:23.492246  968017 out.go:252] * Restarting existing docker container for "embed-certs-095356" ...
	I0908 12:49:23.492346  968017 cli_runner.go:164] Run: docker start embed-certs-095356
	I0908 12:49:23.750403  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:23.769795  968017 kic.go:430] container "embed-certs-095356" state is running.
	I0908 12:49:23.770316  968017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-095356
	I0908 12:49:23.790284  968017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/config.json ...
	I0908 12:49:23.790565  968017 machine.go:93] provisionDockerMachine start ...
	I0908 12:49:23.790652  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:23.813467  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:23.813785  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:23.813800  968017 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:49:23.814519  968017 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48384->127.0.0.1:33503: read: connection reset by peer
	I0908 12:49:26.939977  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-095356
	
	I0908 12:49:26.940015  968017 ubuntu.go:182] provisioning hostname "embed-certs-095356"
	I0908 12:49:26.940103  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:26.960115  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:26.960359  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:26.960375  968017 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-095356 && echo "embed-certs-095356" | sudo tee /etc/hostname
	I0908 12:49:27.098216  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-095356
	
	I0908 12:49:27.098349  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:27.119969  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:27.120236  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:27.120258  968017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-095356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-095356/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-095356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:49:27.244836  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:49:27.244884  968017 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 12:49:27.244919  968017 ubuntu.go:190] setting up certificates
	I0908 12:49:27.244946  968017 provision.go:84] configureAuth start
	I0908 12:49:27.245061  968017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-095356
	I0908 12:49:27.264701  968017 provision.go:143] copyHostCerts
	I0908 12:49:27.264782  968017 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem, removing ...
	I0908 12:49:27.264800  968017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem
	I0908 12:49:27.264866  968017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 12:49:27.264984  968017 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem, removing ...
	I0908 12:49:27.264995  968017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem
	I0908 12:49:27.265021  968017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 12:49:27.265070  968017 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem, removing ...
	I0908 12:49:27.265077  968017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem
	I0908 12:49:27.265098  968017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 12:49:27.265147  968017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.embed-certs-095356 san=[127.0.0.1 192.168.76.2 embed-certs-095356 localhost minikube]
	I0908 12:49:27.478954  968017 provision.go:177] copyRemoteCerts
	I0908 12:49:27.479034  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:49:27.479072  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:27.497777  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:27.594551  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:49:27.622480  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0908 12:49:27.650190  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:49:27.677556  968017 provision.go:87] duration metric: took 432.588736ms to configureAuth
	I0908 12:49:27.677589  968017 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:49:27.677815  968017 config.go:182] Loaded profile config "embed-certs-095356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:49:27.677938  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:27.698245  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:27.698549  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:27.698567  968017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:49:28.026101  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:49:28.026153  968017 machine.go:96] duration metric: took 4.235551829s to provisionDockerMachine
	I0908 12:49:28.026167  968017 start.go:293] postStartSetup for "embed-certs-095356" (driver="docker")
	I0908 12:49:28.026181  968017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:49:28.026243  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:49:28.026301  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.047864  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.141987  968017 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:49:28.146300  968017 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:49:28.146346  968017 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:49:28.146356  968017 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:49:28.146366  968017 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:49:28.146382  968017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 12:49:28.146446  968017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 12:49:28.146562  968017 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem -> 6186202.pem in /etc/ssl/certs
	I0908 12:49:28.146690  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:49:28.157179  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:49:28.186965  968017 start.go:296] duration metric: took 160.778964ms for postStartSetup
	I0908 12:49:28.187059  968017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:49:28.187106  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.206758  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.293425  968017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:49:28.298894  968017 fix.go:56] duration metric: took 4.827979324s for fixHost
	I0908 12:49:28.298928  968017 start.go:83] releasing machines lock for "embed-certs-095356", held for 4.828041707s
	I0908 12:49:28.298991  968017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-095356
	I0908 12:49:28.319080  968017 ssh_runner.go:195] Run: cat /version.json
	I0908 12:49:28.319159  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.319190  968017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:49:28.319261  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.340864  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.342188  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.428053  968017 ssh_runner.go:195] Run: systemctl --version
	I0908 12:49:28.501265  968017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:49:28.645284  968017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:49:28.650558  968017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:49:28.659998  968017 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:49:28.660080  968017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:49:28.669203  968017 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 12:49:28.669230  968017 start.go:495] detecting cgroup driver to use...
	I0908 12:49:28.669266  968017 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:49:28.669311  968017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:49:28.681994  968017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:49:28.695114  968017 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:49:28.695194  968017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:49:28.708625  968017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:49:28.720641  968017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:49:28.798301  968017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:49:28.880045  968017 docker.go:234] disabling docker service ...
	I0908 12:49:28.880123  968017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:49:28.892469  968017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:49:28.903906  968017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:49:28.991744  968017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:49:29.072520  968017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:49:29.086635  968017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:49:29.104777  968017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:49:29.104847  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.115495  968017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:49:29.115587  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.126120  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.136593  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.148026  968017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:49:29.157553  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.168412  968017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.178655  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
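Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a minimal sketch: the values are exactly the ones substituted by the commands above, but the [crio.image]/[crio.runtime] section placement is assumed from CRI-O's stock drop-in layout.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]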
	I0908 12:49:29.189820  968017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:49:29.198879  968017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:49:29.208182  968017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:49:29.290790  968017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 12:49:29.417281  968017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:49:29.417384  968017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:49:29.421280  968017 start.go:563] Will wait 60s for crictl version
	I0908 12:49:29.421346  968017 ssh_runner.go:195] Run: which crictl
	I0908 12:49:29.425224  968017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:49:29.463553  968017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 12:49:29.463638  968017 ssh_runner.go:195] Run: crio --version
	I0908 12:49:29.506438  968017 ssh_runner.go:195] Run: crio --version
	I0908 12:49:29.547947  968017 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 12:49:29.549125  968017 cli_runner.go:164] Run: docker network inspect embed-certs-095356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:49:29.567251  968017 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0908 12:49:29.571559  968017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:49:29.583638  968017 kubeadm.go:875] updating cluster {Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:49:29.583786  968017 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:49:29.583863  968017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:49:29.628331  968017 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:49:29.628362  968017 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:49:29.628431  968017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:49:29.667577  968017 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:49:29.667607  968017 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:49:29.667618  968017 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0908 12:49:29.667774  968017 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-095356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:49:29.667845  968017 ssh_runner.go:195] Run: crio config
	I0908 12:49:29.714731  968017 cni.go:84] Creating CNI manager for ""
	I0908 12:49:29.714763  968017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:49:29.714778  968017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:49:29.714806  968017 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-095356 NodeName:embed-certs-095356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:49:29.714964  968017 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-095356"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
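The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On a kubeadm recent enough to ship the `kubeadm config validate` subcommand (v1.26 and later), a file of this shape can be sanity-checked by hand, e.g. (hypothetical manual check, not part of the test run):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new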
	I0908 12:49:29.715064  968017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:49:29.724537  968017 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:49:29.724606  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:49:29.734183  968017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0908 12:49:29.752695  968017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:49:29.770346  968017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0908 12:49:29.788189  968017 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:49:29.792659  968017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
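Both /etc/hosts edits (this one and the host.minikube.internal one earlier) follow the same pattern: strip any stale line for the name, then append the current mapping. The net effect on this node is two extra entries, with the values taken from the commands above:

    192.168.76.1    host.minikube.internal
    192.168.76.2    control-plane.minikube.internal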
	I0908 12:49:29.806295  968017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:49:29.885492  968017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:49:29.899924  968017 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356 for IP: 192.168.76.2
	I0908 12:49:29.899947  968017 certs.go:194] generating shared ca certs ...
	I0908 12:49:29.899965  968017 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:29.900170  968017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 12:49:29.900232  968017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 12:49:29.900244  968017 certs.go:256] generating profile certs ...
	I0908 12:49:29.900397  968017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/client.key
	I0908 12:49:29.900479  968017 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/apiserver.key.351e8f67
	I0908 12:49:29.900529  968017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/proxy-client.key
	I0908 12:49:29.900673  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem (1338 bytes)
	W0908 12:49:29.900723  968017 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620_empty.pem, impossibly tiny 0 bytes
	I0908 12:49:29.900738  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:49:29.900773  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:49:29.900804  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:49:29.900834  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 12:49:29.900885  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:49:29.901844  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:49:29.929236  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:49:29.955283  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:49:30.001144  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 12:49:30.083811  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0908 12:49:30.111611  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:49:30.137544  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:49:30.162987  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 12:49:30.190308  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /usr/share/ca-certificates/6186202.pem (1708 bytes)
	I0908 12:49:30.216267  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:49:30.241179  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem --> /usr/share/ca-certificates/618620.pem (1338 bytes)
	I0908 12:49:30.266532  968017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:49:30.286676  968017 ssh_runner.go:195] Run: openssl version
	I0908 12:49:30.292793  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6186202.pem && ln -fs /usr/share/ca-certificates/6186202.pem /etc/ssl/certs/6186202.pem"
	I0908 12:49:30.302890  968017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6186202.pem
	I0908 12:49:30.307054  968017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:43 /usr/share/ca-certificates/6186202.pem
	I0908 12:49:30.307137  968017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6186202.pem
	I0908 12:49:30.314839  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6186202.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:49:30.324591  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:49:30.334856  968017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:49:30.339200  968017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:49:30.339265  968017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:49:30.346720  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:49:30.356744  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/618620.pem && ln -fs /usr/share/ca-certificates/618620.pem /etc/ssl/certs/618620.pem"
	I0908 12:49:30.366464  968017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/618620.pem
	I0908 12:49:30.370295  968017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:43 /usr/share/ca-certificates/618620.pem
	I0908 12:49:30.370359  968017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/618620.pem
	I0908 12:49:30.377461  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/618620.pem /etc/ssl/certs/51391683.0"
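The symlink names used above come from OpenSSL's hashed-directory lookup: `openssl x509 -hash -noout` prints the certificate's subject hash, and the PEM is linked as <hash>.0 so the default verify path can find it. The correspondence is visible directly in this log, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                            # -> minikubeCA.pem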
	I0908 12:49:30.387829  968017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:49:30.392212  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:49:30.399604  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:49:30.406794  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:49:30.415234  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:49:30.424814  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:49:30.433404  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
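The `-checkend 86400` runs above exit non-zero if the certificate expires within the next 86400 seconds (24 hours), presumably so minikube can decide whether the existing control-plane certificates are still usable on restart. To inspect an expiry date by hand (same openssl binary; the path is one of those checked above):

    openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt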
	I0908 12:49:30.441261  968017 kubeadm.go:392] StartCluster: {Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:49:30.441390  968017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:49:30.441443  968017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:49:30.495102  968017 cri.go:89] found id: ""
	I0908 12:49:30.495193  968017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:49:30.507375  968017 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:49:30.507460  968017 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:49:30.507518  968017 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:49:30.525592  968017 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:49:30.526663  968017 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-095356" does not appear in /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:49:30.527123  968017 kubeconfig.go:62] /home/jenkins/minikube-integration/21512-614854/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-095356" cluster setting kubeconfig missing "embed-certs-095356" context setting]
	I0908 12:49:30.527890  968017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:30.529807  968017 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:49:30.589773  968017 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0908 12:49:30.589818  968017 kubeadm.go:593] duration metric: took 82.347027ms to restartPrimaryControlPlane
	I0908 12:49:30.589831  968017 kubeadm.go:394] duration metric: took 148.584231ms to StartCluster
	I0908 12:49:30.589855  968017 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:30.589960  968017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:49:30.592381  968017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:30.592824  968017 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:49:30.593255  968017 config.go:182] Loaded profile config "embed-certs-095356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:49:30.593363  968017 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:49:30.593868  968017 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-095356"
	I0908 12:49:30.593970  968017 addons.go:69] Setting metrics-server=true in profile "embed-certs-095356"
	I0908 12:49:30.594004  968017 addons.go:238] Setting addon metrics-server=true in "embed-certs-095356"
	I0908 12:49:30.594027  968017 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-095356"
	W0908 12:49:30.594087  968017 addons.go:247] addon storage-provisioner should already be in state true
	W0908 12:49:30.594034  968017 addons.go:247] addon metrics-server should already be in state true
	I0908 12:49:30.594192  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.594784  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.593949  968017 addons.go:69] Setting dashboard=true in profile "embed-certs-095356"
	I0908 12:49:30.594998  968017 addons.go:238] Setting addon dashboard=true in "embed-certs-095356"
	W0908 12:49:30.595035  968017 addons.go:247] addon dashboard should already be in state true
	I0908 12:49:30.593936  968017 addons.go:69] Setting default-storageclass=true in profile "embed-certs-095356"
	I0908 12:49:30.595128  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.595190  968017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-095356"
	I0908 12:49:30.595779  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.595794  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.596312  968017 out.go:179] * Verifying Kubernetes components...
	I0908 12:49:30.596702  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.597381  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.598769  968017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:49:30.627242  968017 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 12:49:30.627336  968017 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:49:30.629053  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 12:49:30.629102  968017 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 12:49:30.629240  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.629609  968017 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:49:30.629646  968017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:49:30.629710  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.630069  968017 addons.go:238] Setting addon default-storageclass=true in "embed-certs-095356"
	W0908 12:49:30.630103  968017 addons.go:247] addon default-storageclass should already be in state true
	I0908 12:49:30.630136  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.630646  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.636302  968017 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 12:49:30.637830  968017 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 12:49:30.639045  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 12:49:30.639070  968017 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 12:49:30.639141  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.654868  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.656480  968017 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:49:30.656502  968017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:49:30.656564  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.657798  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.664788  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.677548  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.977337  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:49:30.993616  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 12:49:30.993641  968017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 12:49:31.081647  968017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:49:31.089969  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 12:49:31.090006  968017 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 12:49:31.178840  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:49:31.184530  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 12:49:31.184567  968017 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 12:49:31.195426  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 12:49:31.195464  968017 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 12:49:31.294073  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 12:49:31.294120  968017 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 12:49:31.299350  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:49:31.299386  968017 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 12:49:31.393389  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 12:49:31.393423  968017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 12:49:31.397630  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:49:31.491283  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 12:49:31.491319  968017 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 12:49:31.514037  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 12:49:31.514072  968017 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 12:49:31.598921  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 12:49:31.599030  968017 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 12:49:31.676276  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 12:49:31.676308  968017 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 12:49:31.702967  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 12:49:31.702996  968017 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 12:49:31.722900  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 12:49:34.500652  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.523264613s)
	I0908 12:49:34.500807  968017 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.419110464s)
	I0908 12:49:34.500856  968017 node_ready.go:35] waiting up to 6m0s for node "embed-certs-095356" to be "Ready" ...
	I0908 12:49:34.594680  968017 node_ready.go:49] node "embed-certs-095356" is "Ready"
	I0908 12:49:34.594723  968017 node_ready.go:38] duration metric: took 93.848547ms for node "embed-certs-095356" to be "Ready" ...
	I0908 12:49:34.594743  968017 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:49:34.594802  968017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:49:36.709062  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.530174828s)
	I0908 12:49:36.709183  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.311509171s)
	I0908 12:49:36.709220  968017 addons.go:479] Verifying addon metrics-server=true in "embed-certs-095356"
	I0908 12:49:36.709338  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.98638923s)
	I0908 12:49:36.709389  968017 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.114563032s)
	I0908 12:49:36.709423  968017 api_server.go:72] duration metric: took 6.116557346s to wait for apiserver process to appear ...
	I0908 12:49:36.709467  968017 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:49:36.709490  968017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:49:36.711538  968017 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-095356 addons enable metrics-server
	
	I0908 12:49:36.713424  968017 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0908 12:49:36.714376  968017 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:49:36.714401  968017 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:49:36.714765  968017 addons.go:514] duration metric: took 6.121413185s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0908 12:49:37.209650  968017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:49:37.215303  968017 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:49:37.215336  968017 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:49:37.709615  968017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:49:37.715220  968017 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0908 12:49:37.716454  968017 api_server.go:141] control plane version: v1.34.0
	I0908 12:49:37.716483  968017 api_server.go:131] duration metric: took 1.007008535s to wait for apiserver health ...
	I0908 12:49:37.716492  968017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:49:37.720243  968017 system_pods.go:59] 9 kube-system pods found
	I0908 12:49:37.720291  968017 system_pods.go:61] "coredns-66bc5c9577-vmqhr" [979cc534-4ad1-434a-9838-7712ba9213b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:49:37.720308  968017 system_pods.go:61] "etcd-embed-certs-095356" [302f42a3-4f92-4028-9ef1-60a84815daef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:49:37.720315  968017 system_pods.go:61] "kindnet-8grdw" [8f60ebac-5974-4e25-a33d-1c82b8618cbc] Running
	I0908 12:49:37.720321  968017 system_pods.go:61] "kube-apiserver-embed-certs-095356" [0b2fc077-f650-4ade-8239-810edf4b9d9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:49:37.720327  968017 system_pods.go:61] "kube-controller-manager-embed-certs-095356" [b08ed252-184b-4d37-a752-e16594aa4a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:49:37.720334  968017 system_pods.go:61] "kube-proxy-rk7d4" [7e92a5d1-aac6-48cb-9460-45706cda644d] Running
	I0908 12:49:37.720340  968017 system_pods.go:61] "kube-scheduler-embed-certs-095356" [0df658e9-5081-432d-a4a7-df0a452660ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:49:37.720348  968017 system_pods.go:61] "metrics-server-746fcd58dc-kw49k" [ccedbe2e-6b00-427c-aee5-d32661187320] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:49:37.720362  968017 system_pods.go:61] "storage-provisioner" [ef3ef697-29b5-4d95-9fce-0d4e14bf1575] Running
	I0908 12:49:37.720370  968017 system_pods.go:74] duration metric: took 3.871512ms to wait for pod list to return data ...
	I0908 12:49:37.720381  968017 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:49:37.723566  968017 default_sa.go:45] found service account: "default"
	I0908 12:49:37.723599  968017 default_sa.go:55] duration metric: took 3.211119ms for default service account to be created ...
	I0908 12:49:37.723612  968017 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:49:37.726952  968017 system_pods.go:86] 9 kube-system pods found
	I0908 12:49:37.726991  968017 system_pods.go:89] "coredns-66bc5c9577-vmqhr" [979cc534-4ad1-434a-9838-7712ba9213b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:49:37.727005  968017 system_pods.go:89] "etcd-embed-certs-095356" [302f42a3-4f92-4028-9ef1-60a84815daef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:49:37.727018  968017 system_pods.go:89] "kindnet-8grdw" [8f60ebac-5974-4e25-a33d-1c82b8618cbc] Running
	I0908 12:49:37.727028  968017 system_pods.go:89] "kube-apiserver-embed-certs-095356" [0b2fc077-f650-4ade-8239-810edf4b9d9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:49:37.727037  968017 system_pods.go:89] "kube-controller-manager-embed-certs-095356" [b08ed252-184b-4d37-a752-e16594aa4a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:49:37.727047  968017 system_pods.go:89] "kube-proxy-rk7d4" [7e92a5d1-aac6-48cb-9460-45706cda644d] Running
	I0908 12:49:37.727056  968017 system_pods.go:89] "kube-scheduler-embed-certs-095356" [0df658e9-5081-432d-a4a7-df0a452660ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:49:37.727068  968017 system_pods.go:89] "metrics-server-746fcd58dc-kw49k" [ccedbe2e-6b00-427c-aee5-d32661187320] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:49:37.727077  968017 system_pods.go:89] "storage-provisioner" [ef3ef697-29b5-4d95-9fce-0d4e14bf1575] Running
	I0908 12:49:37.727090  968017 system_pods.go:126] duration metric: took 3.469285ms to wait for k8s-apps to be running ...
	I0908 12:49:37.727103  968017 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:49:37.727180  968017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:49:37.739211  968017 system_svc.go:56] duration metric: took 12.098934ms WaitForService to wait for kubelet
	I0908 12:49:37.739249  968017 kubeadm.go:578] duration metric: took 7.146380991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:49:37.739275  968017 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:49:37.742737  968017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 12:49:37.742768  968017 node_conditions.go:123] node cpu capacity is 8
	I0908 12:49:37.742783  968017 node_conditions.go:105] duration metric: took 3.502532ms to run NodePressure ...
	I0908 12:49:37.742797  968017 start.go:241] waiting for startup goroutines ...
	I0908 12:49:37.742806  968017 start.go:246] waiting for cluster config update ...
	I0908 12:49:37.742820  968017 start.go:255] writing updated cluster config ...
	I0908 12:49:37.743123  968017 ssh_runner.go:195] Run: rm -f paused
	I0908 12:49:37.747094  968017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:49:37.751159  968017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vmqhr" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 12:49:39.757202  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:41.781793  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:44.257272  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:46.757216  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:49.257899  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:51.757748  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:54.257571  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:56.257624  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:58.757397  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:01.256722  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:03.257559  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:05.757014  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:08.257679  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:10.757606  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:13.257013  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	I0908 12:50:13.758380  968017 pod_ready.go:94] pod "coredns-66bc5c9577-vmqhr" is "Ready"
	I0908 12:50:13.758411  968017 pod_ready.go:86] duration metric: took 36.007221342s for pod "coredns-66bc5c9577-vmqhr" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.761685  968017 pod_ready.go:83] waiting for pod "etcd-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.767420  968017 pod_ready.go:94] pod "etcd-embed-certs-095356" is "Ready"
	I0908 12:50:13.767459  968017 pod_ready.go:86] duration metric: took 5.743199ms for pod "etcd-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.862817  968017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.869826  968017 pod_ready.go:94] pod "kube-apiserver-embed-certs-095356" is "Ready"
	I0908 12:50:13.869857  968017 pod_ready.go:86] duration metric: took 7.008074ms for pod "kube-apiserver-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.872326  968017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.954638  968017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-095356" is "Ready"
	I0908 12:50:13.954671  968017 pod_ready.go:86] duration metric: took 82.317504ms for pod "kube-controller-manager-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:14.154975  968017 pod_ready.go:83] waiting for pod "kube-proxy-rk7d4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:14.555035  968017 pod_ready.go:94] pod "kube-proxy-rk7d4" is "Ready"
	I0908 12:50:14.555083  968017 pod_ready.go:86] duration metric: took 400.07973ms for pod "kube-proxy-rk7d4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:14.755134  968017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:15.155832  968017 pod_ready.go:94] pod "kube-scheduler-embed-certs-095356" is "Ready"
	I0908 12:50:15.155864  968017 pod_ready.go:86] duration metric: took 400.702953ms for pod "kube-scheduler-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:15.155885  968017 pod_ready.go:40] duration metric: took 37.408745743s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:50:15.202619  968017 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:50:15.204368  968017 out.go:179] * Done! kubectl is now configured to use "embed-certs-095356" cluster and "default" namespace by default
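
	The start log above repeatedly checks https://192.168.76.2:8443/healthz, tolerating 500 responses while bootstrap post-start hooks (e.g. rbac/bootstrap-roles) finish, and proceeds once the endpoint returns 200. The following is a minimal, hypothetical Go sketch of that polling pattern; it is not minikube's actual api_server.go, and the function name, interval, and InsecureSkipVerify setting are illustrative assumptions only.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver healthz endpoint until it returns 200 OK
	// or the timeout expires, mirroring the ~500ms cadence visible in the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		// The test cluster uses a self-signed apiserver certificate, so TLS
		// verification is skipped here purely for illustration.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: apiserver is healthy
				}
				// A 500 listing "[-]poststarthook/... failed" checks is expected
				// while bootstrap hooks are still completing; keep polling.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}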
	
	
	==> CRI-O <==
	Sep 08 12:53:36 no-preload-997730 crio[681]: time="2025-09-08 12:53:36.686473779Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=35e36f0a-68b2-44e3-ac0f-645edb8d8baf name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:45 no-preload-997730 crio[681]: time="2025-09-08 12:53:45.685583003Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ef36b634-aede-4a2a-9cda-f89ba94a93eb name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:45 no-preload-997730 crio[681]: time="2025-09-08 12:53:45.685884265Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ef36b634-aede-4a2a-9cda-f89ba94a93eb name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:51 no-preload-997730 crio[681]: time="2025-09-08 12:53:51.685405067Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0ed8fa19-b9a5-449c-9c34-54258cb7ff4b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:51 no-preload-997730 crio[681]: time="2025-09-08 12:53:51.685743400Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=0ed8fa19-b9a5-449c-9c34-54258cb7ff4b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:57 no-preload-997730 crio[681]: time="2025-09-08 12:53:57.686089146Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=13f105be-620e-452a-bf42-9611e0213ba3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:53:57 no-preload-997730 crio[681]: time="2025-09-08 12:53:57.686432740Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=13f105be-620e-452a-bf42-9611e0213ba3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:02 no-preload-997730 crio[681]: time="2025-09-08 12:54:02.685550903Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ca103859-bb6f-4e5c-b377-158da0dd044a name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:02 no-preload-997730 crio[681]: time="2025-09-08 12:54:02.685868753Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ca103859-bb6f-4e5c-b377-158da0dd044a name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:11 no-preload-997730 crio[681]: time="2025-09-08 12:54:11.685962503Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=afb83c33-a095-49a4-b47f-64e8c81b1231 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:11 no-preload-997730 crio[681]: time="2025-09-08 12:54:11.686318026Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=afb83c33-a095-49a4-b47f-64e8c81b1231 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:15 no-preload-997730 crio[681]: time="2025-09-08 12:54:15.686205623Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=55a40884-ccb3-4c27-8a55-822bd425d7a8 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:15 no-preload-997730 crio[681]: time="2025-09-08 12:54:15.686495525Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=55a40884-ccb3-4c27-8a55-822bd425d7a8 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:23 no-preload-997730 crio[681]: time="2025-09-08 12:54:23.686221338Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4bfdc7a7-15e0-4cb8-b681-96cb0eedf1aa name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:23 no-preload-997730 crio[681]: time="2025-09-08 12:54:23.686446689Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4bfdc7a7-15e0-4cb8-b681-96cb0eedf1aa name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:26 no-preload-997730 crio[681]: time="2025-09-08 12:54:26.685216687Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8cf4ed40-f65a-4522-9224-40127fdabe93 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:26 no-preload-997730 crio[681]: time="2025-09-08 12:54:26.685499183Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8cf4ed40-f65a-4522-9224-40127fdabe93 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:35 no-preload-997730 crio[681]: time="2025-09-08 12:54:35.685449666Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b04f4f3b-318f-4eca-a367-a04417761f3c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:35 no-preload-997730 crio[681]: time="2025-09-08 12:54:35.685709291Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b04f4f3b-318f-4eca-a367-a04417761f3c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:38 no-preload-997730 crio[681]: time="2025-09-08 12:54:38.686800850Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=723d0326-d543-4a6d-8335-4acfb99301e2 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:38 no-preload-997730 crio[681]: time="2025-09-08 12:54:38.687608910Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=723d0326-d543-4a6d-8335-4acfb99301e2 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:46 no-preload-997730 crio[681]: time="2025-09-08 12:54:46.685644029Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2d7ff576-7697-485e-be5d-82b42492ae54 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:46 no-preload-997730 crio[681]: time="2025-09-08 12:54:46.685935644Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2d7ff576-7697-485e-be5d-82b42492ae54 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:50 no-preload-997730 crio[681]: time="2025-09-08 12:54:50.686281214Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7370c7f8-7cc5-49bf-b663-d816c0de0df1 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:50 no-preload-997730 crio[681]: time="2025-09-08 12:54:50.686602066Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=7370c7f8-7cc5-49bf-b663-d816c0de0df1 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	dc897e73c038b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   About a minute ago   Exited              dashboard-metrics-scraper   8                   9b4a649594034       dashboard-metrics-scraper-6ffb444bf9-c5f6j
	4b88512f9e94b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago       Running             storage-provisioner         2                   992ea9eb9f38c       storage-provisioner
	9b7598014ed3c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago       Running             coredns                     1                   bd14f8125a980       coredns-66bc5c9577-nd9km
	7d5304b0662ac       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago       Running             kindnet-cni                 1                   ae1d5245ed943       kindnet-rm2cd
	dc14c810d71cb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago       Running             busybox                     1                   6ee42b98d10aa       busybox
	c7df65c482a8f       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago       Running             kube-proxy                  1                   b156bc3007a0f       kube-proxy-wqscj
	cf99178b116a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago       Exited              storage-provisioner         1                   992ea9eb9f38c       storage-provisioner
	3e76448df1da8       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago       Running             kube-controller-manager     1                   d771a760cefb3       kube-controller-manager-no-preload-997730
	81618c2be90c6       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago       Running             kube-scheduler              1                   55f30126795dc       kube-scheduler-no-preload-997730
	0ba9ac2cbe3a9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago       Running             kube-apiserver              1                   a7e4455729718       kube-apiserver-no-preload-997730
	7758c72adbf53       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago       Running             etcd                        1                   22fb0fadc4a8d       etcd-no-preload-997730
	
	
	==> coredns [9b7598014ed3cbf3509fb26017bbe743376f7422073001c375ef931c3ea55887] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46191 - 23439 "HINFO IN 4591962211074296713.5651574647663328576. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.093395135s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-997730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-997730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=no-preload-997730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_35_17_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:35:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-997730
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:54:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:53:23 +0000   Mon, 08 Sep 2025 12:35:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:53:23 +0000   Mon, 08 Sep 2025 12:35:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:53:23 +0000   Mon, 08 Sep 2025 12:35:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:53:23 +0000   Mon, 08 Sep 2025 12:35:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-997730
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 08457c5968f44b66854208e727a11fe6
	  System UUID:                002053e4-2f46-4bc1-878b-646a0ed65720
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-nd9km                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-no-preload-997730                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-rm2cd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-no-preload-997730              250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-997730     200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-wqscj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-997730              100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-c8jxj               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-c5f6j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vwd7n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-997730 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-997730 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node no-preload-997730 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node no-preload-997730 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node no-preload-997730 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m                kubelet          Node no-preload-997730 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           19m                node-controller  Node no-preload-997730 event: Registered Node no-preload-997730 in Controller
	  Normal   NodeReady                19m                kubelet          Node no-preload-997730 status is now: NodeReady
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-997730 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-997730 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node no-preload-997730 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node no-preload-997730 event: Registered Node no-preload-997730 in Controller
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +1.006042] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000007] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +2.015807] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000007] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000003] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +4.251670] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000007] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +8.195202] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000006] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	
	
	==> etcd [7758c72adbf536f74cef6a4ad79725287c23eb8faa60606d60c46846663f4562] <==
	{"level":"warn","ts":"2025-09-08T12:36:12.123956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.130574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.137435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.144556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.151325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.183544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.191225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.198556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.205864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.212827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.220457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.228374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.262989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.275864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.282761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:12.332610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33676","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T12:46:11.305957Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1023}
	{"level":"info","ts":"2025-09-08T12:46:11.325929Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1023,"took":"19.599129ms","hash":1163487700,"current-db-size-bytes":3194880,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1302528,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-08T12:46:11.326016Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1163487700,"revision":1023,"compact-revision":-1}
	{"level":"warn","ts":"2025-09-08T12:46:56.505729Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.344666ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T12:46:56.505844Z","caller":"traceutil/trace.go:172","msg":"trace[819053526] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1342; }","duration":"120.478356ms","start":"2025-09-08T12:46:56.385342Z","end":"2025-09-08T12:46:56.505821Z","steps":["trace[819053526] 'agreement among raft nodes before linearized reading'  (duration: 54.522275ms)","trace[819053526] 'range keys from in-memory index tree'  (duration: 65.785679ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T12:46:56.506011Z","caller":"traceutil/trace.go:172","msg":"trace[398850718] transaction","detail":"{read_only:false; response_revision:1343; number_of_response:1; }","duration":"127.911153ms","start":"2025-09-08T12:46:56.378061Z","end":"2025-09-08T12:46:56.505972Z","steps":["trace[398850718] 'process raft request'  (duration: 61.84444ms)","trace[398850718] 'compare'  (duration: 65.752117ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T12:51:11.311140Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1302}
	{"level":"info","ts":"2025-09-08T12:51:11.313986Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1302,"took":"2.518813ms","hash":3556854883,"current-db-size-bytes":3194880,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1851392,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-08T12:51:11.314029Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3556854883,"revision":1302,"compact-revision":1023}
	
	
	==> kernel <==
	 12:54:58 up  3:37,  0 users,  load average: 1.33, 1.17, 1.46
	Linux no-preload-997730 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7d5304b0662ac1fc51eea22f8771df6cd4abb3454c9dea4d4ca00b695c659936] <==
	I0908 12:52:55.478217       1 main.go:301] handling current node
	I0908 12:53:05.478028       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:53:05.478059       1 main.go:301] handling current node
	I0908 12:53:15.477367       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:53:15.477406       1 main.go:301] handling current node
	I0908 12:53:25.477919       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:53:25.477956       1 main.go:301] handling current node
	I0908 12:53:35.478023       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:53:35.478077       1 main.go:301] handling current node
	I0908 12:53:45.477599       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:53:45.477639       1 main.go:301] handling current node
	I0908 12:53:55.477999       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:53:55.478038       1 main.go:301] handling current node
	I0908 12:54:05.477524       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:54:05.477562       1 main.go:301] handling current node
	I0908 12:54:15.477338       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:54:15.477370       1 main.go:301] handling current node
	I0908 12:54:25.477591       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:54:25.477632       1 main.go:301] handling current node
	I0908 12:54:35.477777       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:54:35.477813       1 main.go:301] handling current node
	I0908 12:54:45.478059       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:54:45.478108       1 main.go:301] handling current node
	I0908 12:54:55.477684       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0908 12:54:55.477714       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ba9ac2cbe3a9cb4b99cde9ed049902e9ce18e226d30a8ace56daec47cfdf923] <==
	I0908 12:51:14.126700       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:52:14.126093       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:52:14.126147       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:52:14.126162       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:52:14.127184       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:52:14.127244       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:52:14.127262       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:52:16.279169       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:52:17.430435       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:53:27.532392       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:53:28.448773       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:54:14.127124       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:54:14.127183       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:54:14.127203       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:54:14.128336       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:54:14.128450       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:54:14.128469       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:54:52.057329       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:54:52.255861       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [3e76448df1da821f94c9676400494d50c5d1a2bc66c1a739602e3c46bf44a9b9] <==
	I0908 12:48:48.628532       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:49:18.528403       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:49:18.636978       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:49:48.532677       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:49:48.643740       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:50:18.537625       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:50:18.650802       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:50:48.542238       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:50:48.658137       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:51:18.547425       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:51:18.665313       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:51:48.552249       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:51:48.673635       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:52:18.557434       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:52:18.682136       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:52:48.561876       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:52:48.689917       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:53:18.567310       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:53:18.698835       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:53:48.571422       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:53:48.706748       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:54:18.576763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:54:18.713934       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:54:48.581158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:54:48.721369       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [c7df65c482a8fad4642d74c3b7444627e0f118a4a4ea911a84ce57eea427c96a] <==
	I0908 12:36:15.283977       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:36:15.517200       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:36:15.618033       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:36:15.618078       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0908 12:36:15.618180       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:36:15.694394       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:36:15.694474       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:36:15.700109       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:36:15.700523       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:36:15.700603       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:15.702162       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:36:15.702183       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:36:15.702186       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:36:15.702218       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:36:15.702314       1 config.go:309] "Starting node config controller"
	I0908 12:36:15.702332       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:36:15.702339       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:36:15.702408       1 config.go:200] "Starting service config controller"
	I0908 12:36:15.702477       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:36:15.803226       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:36:15.803247       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 12:36:15.803227       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [81618c2be90c6613260ac7ead7b58db175be969f12d6dba8ce9913573920b8fc] <==
	I0908 12:36:11.315087       1 serving.go:386] Generated self-signed cert in-memory
	W0908 12:36:13.077215       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 12:36:13.077371       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 12:36:13.077436       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 12:36:13.077476       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 12:36:13.283180       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 12:36:13.283327       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:13.290400       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:13.290485       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:13.290513       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 12:36:13.290682       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 12:36:13.390622       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:54:12 no-preload-997730 kubelet[816]: I0908 12:54:12.685225     816 scope.go:117] "RemoveContainer" containerID="dc897e73c038ba033002ee0649be1dc1ee85a4970b36bcbd8b439e077af80168"
	Sep 08 12:54:12 no-preload-997730 kubelet[816]: E0908 12:54:12.685472     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c5f6j_kubernetes-dashboard(ee075aa8-c59d-4df7-9ff2-ed3037d29bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c5f6j" podUID="ee075aa8-c59d-4df7-9ff2-ed3037d29bd7"
	Sep 08 12:54:15 no-preload-997730 kubelet[816]: E0908 12:54:15.686883     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n" podUID="537a29c5-ffc1-49e3-8a70-737656b3a999"
	Sep 08 12:54:18 no-preload-997730 kubelet[816]: E0908 12:54:18.872927     816 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336058872660832  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:54:18 no-preload-997730 kubelet[816]: E0908 12:54:18.872978     816 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336058872660832  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:54:23 no-preload-997730 kubelet[816]: I0908 12:54:23.685532     816 scope.go:117] "RemoveContainer" containerID="dc897e73c038ba033002ee0649be1dc1ee85a4970b36bcbd8b439e077af80168"
	Sep 08 12:54:23 no-preload-997730 kubelet[816]: E0908 12:54:23.685737     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c5f6j_kubernetes-dashboard(ee075aa8-c59d-4df7-9ff2-ed3037d29bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c5f6j" podUID="ee075aa8-c59d-4df7-9ff2-ed3037d29bd7"
	Sep 08 12:54:23 no-preload-997730 kubelet[816]: E0908 12:54:23.686793     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-c8jxj" podUID="31558cc6-5760-42fa-85e9-9318bb3d4398"
	Sep 08 12:54:26 no-preload-997730 kubelet[816]: E0908 12:54:26.685829     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n" podUID="537a29c5-ffc1-49e3-8a70-737656b3a999"
	Sep 08 12:54:28 no-preload-997730 kubelet[816]: E0908 12:54:28.874146     816 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336068873894376  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:54:28 no-preload-997730 kubelet[816]: E0908 12:54:28.874183     816 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336068873894376  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:54:35 no-preload-997730 kubelet[816]: E0908 12:54:35.686071     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-c8jxj" podUID="31558cc6-5760-42fa-85e9-9318bb3d4398"
	Sep 08 12:54:38 no-preload-997730 kubelet[816]: I0908 12:54:38.686191     816 scope.go:117] "RemoveContainer" containerID="dc897e73c038ba033002ee0649be1dc1ee85a4970b36bcbd8b439e077af80168"
	Sep 08 12:54:38 no-preload-997730 kubelet[816]: E0908 12:54:38.686426     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c5f6j_kubernetes-dashboard(ee075aa8-c59d-4df7-9ff2-ed3037d29bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c5f6j" podUID="ee075aa8-c59d-4df7-9ff2-ed3037d29bd7"
	Sep 08 12:54:38 no-preload-997730 kubelet[816]: E0908 12:54:38.688837     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n" podUID="537a29c5-ffc1-49e3-8a70-737656b3a999"
	Sep 08 12:54:38 no-preload-997730 kubelet[816]: E0908 12:54:38.875876     816 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336078875554905  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:54:38 no-preload-997730 kubelet[816]: E0908 12:54:38.875920     816 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336078875554905  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:54:46 no-preload-997730 kubelet[816]: E0908 12:54:46.686325     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-c8jxj" podUID="31558cc6-5760-42fa-85e9-9318bb3d4398"
	Sep 08 12:54:48 no-preload-997730 kubelet[816]: E0908 12:54:48.877525     816 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336088877263062  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:54:48 no-preload-997730 kubelet[816]: E0908 12:54:48.877575     816 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336088877263062  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:54:50 no-preload-997730 kubelet[816]: E0908 12:54:50.687016     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vwd7n" podUID="537a29c5-ffc1-49e3-8a70-737656b3a999"
	Sep 08 12:54:53 no-preload-997730 kubelet[816]: I0908 12:54:53.685487     816 scope.go:117] "RemoveContainer" containerID="dc897e73c038ba033002ee0649be1dc1ee85a4970b36bcbd8b439e077af80168"
	Sep 08 12:54:53 no-preload-997730 kubelet[816]: E0908 12:54:53.685688     816 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-c5f6j_kubernetes-dashboard(ee075aa8-c59d-4df7-9ff2-ed3037d29bd7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-c5f6j" podUID="ee075aa8-c59d-4df7-9ff2-ed3037d29bd7"
	Sep 08 12:54:58 no-preload-997730 kubelet[816]: E0908 12:54:58.878992     816 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336098878702180  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 08 12:54:58 no-preload-997730 kubelet[816]: E0908 12:54:58.879034     816 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336098878702180  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	
	
	==> storage-provisioner [4b88512f9e94b34a706fe9465eeda8e748132a10da99bd52c7082bcde29020b4] <==
	W0908 12:54:33.846036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:35.849778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:35.854419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:37.857976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:37.862719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:39.865799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:39.870827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:41.874945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:41.879105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:43.882812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:43.888324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:45.891737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:45.896782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:47.899974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:47.905643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:49.909346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:49.913247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:51.916865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:51.920759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:53.924407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:53.930111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:55.933280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:55.937606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:57.940887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:54:57.947083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cf99178b116a82cfd92234054f6617f745bf03bb9d326579538f99f84f849627] <==
	I0908 12:36:14.984575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 12:36:44.991155       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997730 -n no-preload-997730
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-997730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-c8jxj kubernetes-dashboard-855c9754f9-vwd7n
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-997730 describe pod metrics-server-746fcd58dc-c8jxj kubernetes-dashboard-855c9754f9-vwd7n
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-997730 describe pod metrics-server-746fcd58dc-c8jxj kubernetes-dashboard-855c9754f9-vwd7n: exit status 1 (62.256934ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-c8jxj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vwd7n" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-997730 describe pod metrics-server-746fcd58dc-c8jxj kubernetes-dashboard-855c9754f9-vwd7n: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4bds7" [81cc6553-f21a-4023-ba22-ee82ccc64adb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-08 12:55:37.584941041 +0000 UTC m=+4956.877683515
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 describe po kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-039958 describe po kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-4bds7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-039958/192.168.103.2
Start Time:       Mon, 08 Sep 2025 12:37:04 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65z8c (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-65z8c:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7 to default-k8s-diff-port-039958
  Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     13m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     13m (x5 over 18m)     kubelet            Error: ErrImagePull
  Normal   BackOff    3m22s (x49 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     2m58s (x51 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 logs kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-039958 logs kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard: exit status 1 (75.961676ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-4bds7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-039958 logs kubernetes-dashboard-855c9754f9-4bds7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-039958
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-039958:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0",
	        "Created": "2025-09-08T12:35:12.669984605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 943028,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:36:46.840310426Z",
	            "FinishedAt": "2025-09-08T12:36:45.956332479Z"
	        },
	        "Image": "sha256:863fa02c4a7dcd4571b30c16c1e6ae3eaa1d1a904931aac9472133ae3328c098",
	        "ResolvConfPath": "/var/lib/docker/containers/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0/hosts",
	        "LogPath": "/var/lib/docker/containers/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0/17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0-json.log",
	        "Name": "/default-k8s-diff-port-039958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-039958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-039958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "17ce0a9ee9fa542462b0d8dbad7ff9f52f6cbf753e1df8d514ebc3f81a5fc7d0",
	                "LowerDir": "/var/lib/docker/overlay2/8d407d5572896c83cf6726cdeb26724de137d736428759a64b1384325408d1c9-init/diff:/var/lib/docker/overlay2/e9bf8e09f770f60ed4c810e0202629dccf6a4c304822cce5a8dffd900ae12eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8d407d5572896c83cf6726cdeb26724de137d736428759a64b1384325408d1c9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8d407d5572896c83cf6726cdeb26724de137d736428759a64b1384325408d1c9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8d407d5572896c83cf6726cdeb26724de137d736428759a64b1384325408d1c9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-039958",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-039958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-039958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-039958",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-039958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b544872b161c42d21f7a410f5d652888f41e145f7481c7e3b7f536351443410",
	            "SandboxKey": "/var/run/docker/netns/6b544872b161",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-039958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:f7:86:36:20:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4fa905f06d1f71ba90a71e371fa03071b5fb80803dc7a3e0fd9c709db8b2357f",
	                    "EndpointID": "dfadf3d1154058e3578a14a5abab544a8e725b8838a5ef759f6273dae5ee74d7",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-039958",
	                        "17ce0a9ee9fa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
E0908 12:55:37.821590  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-039958 logs -n 25
E0908 12:55:39.253290  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/old-k8s-version-896003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-039958 logs -n 25: (1.233470424s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p newest-cni-139998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ stop    │ -p newest-cni-139998 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ addons  │ enable dashboard -p newest-cni-139998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ start   │ -p newest-cni-139998 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ image   │ newest-cni-139998 image list --format=json                                                                                                                                                                                                    │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ pause   │ -p newest-cni-139998 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ unpause │ -p newest-cni-139998 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ delete  │ -p newest-cni-139998                                                                                                                                                                                                                          │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ delete  │ -p newest-cni-139998                                                                                                                                                                                                                          │ newest-cni-139998            │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ delete  │ -p disable-driver-mounts-173021                                                                                                                                                                                                               │ disable-driver-mounts-173021 │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:47 UTC │
	│ start   │ -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:47 UTC │ 08 Sep 25 12:49 UTC │
	│ addons  │ enable metrics-server -p embed-certs-095356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:49 UTC │
	│ stop    │ -p embed-certs-095356 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:49 UTC │
	│ addons  │ enable dashboard -p embed-certs-095356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:49 UTC │
	│ start   │ -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-095356           │ jenkins │ v1.36.0 │ 08 Sep 25 12:49 UTC │ 08 Sep 25 12:50 UTC │
	│ image   │ old-k8s-version-896003 image list --format=json                                                                                                                                                                                               │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ pause   │ -p old-k8s-version-896003 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ unpause │ -p old-k8s-version-896003 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ delete  │ -p old-k8s-version-896003                                                                                                                                                                                                                     │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ delete  │ -p old-k8s-version-896003                                                                                                                                                                                                                     │ old-k8s-version-896003       │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ image   │ no-preload-997730 image list --format=json                                                                                                                                                                                                    │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:54 UTC │
	│ pause   │ -p no-preload-997730 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:54 UTC │ 08 Sep 25 12:55 UTC │
	│ unpause │ -p no-preload-997730 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:55 UTC │ 08 Sep 25 12:55 UTC │
	│ delete  │ -p no-preload-997730                                                                                                                                                                                                                          │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:55 UTC │ 08 Sep 25 12:55 UTC │
	│ delete  │ -p no-preload-997730                                                                                                                                                                                                                          │ no-preload-997730            │ jenkins │ v1.36.0 │ 08 Sep 25 12:55 UTC │ 08 Sep 25 12:55 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
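For reference, the embed-certs-095356 rows at the tail of this table map onto the following CLI invocations; this is a minimal reconstruction from the table rows (profile name, flags and image override are taken verbatim from the rows, the binary name is assumed to be the minikube build under test):

    minikube stop -p embed-certs-095356 --alsologtostderr -v=3
    minikube addons enable dashboard -p embed-certs-095356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    minikube start -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.34.0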
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:49:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:49:23.279400  968017 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:49:23.279807  968017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:49:23.279824  968017 out.go:374] Setting ErrFile to fd 2...
	I0908 12:49:23.279829  968017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:49:23.280064  968017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:49:23.280789  968017 out.go:368] Setting JSON to false
	I0908 12:49:23.282282  968017 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12707,"bootTime":1757323056,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 12:49:23.282415  968017 start.go:140] virtualization: kvm guest
	I0908 12:49:23.284711  968017 out.go:179] * [embed-certs-095356] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 12:49:23.286739  968017 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:49:23.286750  968017 notify.go:220] Checking for updates...
	I0908 12:49:23.289669  968017 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:49:23.291064  968017 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:49:23.292333  968017 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 12:49:23.293647  968017 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 12:49:23.295067  968017 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:49:23.296896  968017 config.go:182] Loaded profile config "embed-certs-095356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:49:23.297523  968017 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:49:23.323231  968017 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:49:23.323393  968017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:49:23.377796  968017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:49:23.367734602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:49:23.377913  968017 docker.go:318] overlay module found
	I0908 12:49:23.379836  968017 out.go:179] * Using the docker driver based on existing profile
	I0908 12:49:23.381063  968017 start.go:304] selected driver: docker
	I0908 12:49:23.381087  968017 start.go:918] validating driver "docker" against &{Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:49:23.381212  968017 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:49:23.382437  968017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:49:23.441035  968017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:49:23.430451531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:49:23.441421  968017 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:49:23.441475  968017 cni.go:84] Creating CNI manager for ""
	I0908 12:49:23.441548  968017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:49:23.441616  968017 start.go:348] cluster config:
	{Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:49:23.444524  968017 out.go:179] * Starting "embed-certs-095356" primary control-plane node in "embed-certs-095356" cluster
	I0908 12:49:23.446148  968017 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:49:23.447633  968017 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:49:23.448890  968017 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:49:23.448967  968017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0908 12:49:23.448984  968017 cache.go:58] Caching tarball of preloaded images
	I0908 12:49:23.449045  968017 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:49:23.449154  968017 preload.go:172] Found /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0908 12:49:23.449170  968017 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:49:23.449314  968017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/config.json ...
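The profile config being saved here is plain JSON; a minimal sketch for inspecting it on the build host, assuming jq is installed and that the field names match the cluster config dumped a few lines above:

    CFG=/home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/config.json
    # pull out the fields that matter for this run (field names assumed from the dump above)
    jq '{Name, Driver, Memory, KubernetesVersion: .KubernetesConfig.KubernetesVersion, ContainerRuntime: .KubernetesConfig.ContainerRuntime}' "$CFG"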
	I0908 12:49:23.470704  968017 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:49:23.470727  968017 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:49:23.470746  968017 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:49:23.470778  968017 start.go:360] acquireMachinesLock for embed-certs-095356: {Name:mk9355040c36d7eff54da75f6473007cb8502c78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:49:23.470872  968017 start.go:364] duration metric: took 46.58µs to acquireMachinesLock for "embed-certs-095356"
	I0908 12:49:23.470895  968017 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:49:23.470902  968017 fix.go:54] fixHost starting: 
	I0908 12:49:23.471117  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:23.490230  968017 fix.go:112] recreateIfNeeded on embed-certs-095356: state=Stopped err=<nil>
	W0908 12:49:23.490302  968017 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:49:23.492246  968017 out.go:252] * Restarting existing docker container for "embed-certs-095356" ...
	I0908 12:49:23.492346  968017 cli_runner.go:164] Run: docker start embed-certs-095356
	I0908 12:49:23.750403  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:23.769795  968017 kic.go:430] container "embed-certs-095356" state is running.
	I0908 12:49:23.770316  968017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-095356
	I0908 12:49:23.790284  968017 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/config.json ...
	I0908 12:49:23.790565  968017 machine.go:93] provisionDockerMachine start ...
	I0908 12:49:23.790652  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:23.813467  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:23.813785  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:23.813800  968017 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:49:23.814519  968017 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48384->127.0.0.1:33503: read: connection reset by peer
	I0908 12:49:26.939977  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-095356
	
	I0908 12:49:26.940015  968017 ubuntu.go:182] provisioning hostname "embed-certs-095356"
	I0908 12:49:26.940103  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:26.960115  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:26.960359  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:26.960375  968017 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-095356 && echo "embed-certs-095356" | sudo tee /etc/hostname
	I0908 12:49:27.098216  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-095356
	
	I0908 12:49:27.098349  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:27.119969  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:27.120236  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:27.120258  968017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-095356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-095356/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-095356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:49:27.244836  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
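The shell snippet above is the entire hostname fix-up; whether the 127.0.1.1 mapping actually landed can be re-checked over the same SSH channel, a sketch assuming the minikube binary and the profile from this run:

    minikube -p embed-certs-095356 ssh -- grep embed-certs-095356 /etc/hosts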
	I0908 12:49:27.244884  968017 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-614854/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-614854/.minikube}
	I0908 12:49:27.244919  968017 ubuntu.go:190] setting up certificates
	I0908 12:49:27.244946  968017 provision.go:84] configureAuth start
	I0908 12:49:27.245061  968017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-095356
	I0908 12:49:27.264701  968017 provision.go:143] copyHostCerts
	I0908 12:49:27.264782  968017 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem, removing ...
	I0908 12:49:27.264800  968017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem
	I0908 12:49:27.264866  968017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/key.pem (1675 bytes)
	I0908 12:49:27.264984  968017 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem, removing ...
	I0908 12:49:27.264995  968017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem
	I0908 12:49:27.265021  968017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/ca.pem (1082 bytes)
	I0908 12:49:27.265070  968017 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem, removing ...
	I0908 12:49:27.265077  968017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem
	I0908 12:49:27.265098  968017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-614854/.minikube/cert.pem (1123 bytes)
	I0908 12:49:27.265147  968017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem org=jenkins.embed-certs-095356 san=[127.0.0.1 192.168.76.2 embed-certs-095356 localhost minikube]
	I0908 12:49:27.478954  968017 provision.go:177] copyRemoteCerts
	I0908 12:49:27.479034  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:49:27.479072  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:27.497777  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:27.594551  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:49:27.622480  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0908 12:49:27.650190  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:49:27.677556  968017 provision.go:87] duration metric: took 432.588736ms to configureAuth
	I0908 12:49:27.677589  968017 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:49:27.677815  968017 config.go:182] Loaded profile config "embed-certs-095356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:49:27.677938  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:27.698245  968017 main.go:141] libmachine: Using SSH client type: native
	I0908 12:49:27.698549  968017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I0908 12:49:27.698567  968017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:49:28.026101  968017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:49:28.026153  968017 machine.go:96] duration metric: took 4.235551829s to provisionDockerMachine
	I0908 12:49:28.026167  968017 start.go:293] postStartSetup for "embed-certs-095356" (driver="docker")
	I0908 12:49:28.026181  968017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:49:28.026243  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:49:28.026301  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.047864  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.141987  968017 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:49:28.146300  968017 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:49:28.146346  968017 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:49:28.146356  968017 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:49:28.146366  968017 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:49:28.146382  968017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/addons for local assets ...
	I0908 12:49:28.146446  968017 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-614854/.minikube/files for local assets ...
	I0908 12:49:28.146562  968017 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem -> 6186202.pem in /etc/ssl/certs
	I0908 12:49:28.146690  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0908 12:49:28.157179  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:49:28.186965  968017 start.go:296] duration metric: took 160.778964ms for postStartSetup
	I0908 12:49:28.187059  968017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:49:28.187106  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.206758  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.293425  968017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:49:28.298894  968017 fix.go:56] duration metric: took 4.827979324s for fixHost
	I0908 12:49:28.298928  968017 start.go:83] releasing machines lock for "embed-certs-095356", held for 4.828041707s
	I0908 12:49:28.298991  968017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-095356
	I0908 12:49:28.319080  968017 ssh_runner.go:195] Run: cat /version.json
	I0908 12:49:28.319159  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.319190  968017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:49:28.319261  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:28.340864  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.342188  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:28.428053  968017 ssh_runner.go:195] Run: systemctl --version
	I0908 12:49:28.501265  968017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:49:28.645284  968017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:49:28.650558  968017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:49:28.659998  968017 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:49:28.660080  968017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:49:28.669203  968017 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 12:49:28.669230  968017 start.go:495] detecting cgroup driver to use...
	I0908 12:49:28.669266  968017 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:49:28.669311  968017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:49:28.681994  968017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:49:28.695114  968017 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:49:28.695194  968017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:49:28.708625  968017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:49:28.720641  968017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:49:28.798301  968017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:49:28.880045  968017 docker.go:234] disabling docker service ...
	I0908 12:49:28.880123  968017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:49:28.892469  968017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:49:28.903906  968017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:49:28.991744  968017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:49:29.072520  968017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:49:29.086635  968017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:49:29.104777  968017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:49:29.104847  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.115495  968017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:49:29.115587  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.126120  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.136593  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.148026  968017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:49:29.157553  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.168412  968017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.178655  968017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:49:29.189820  968017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:49:29.198879  968017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:49:29.208182  968017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:49:29.290790  968017 ssh_runner.go:195] Run: sudo systemctl restart crio
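Taken together, the sed calls above set the pause image and the cgroup driver in the CRI-O drop-in and then restart the runtime; a consolidated sketch of the same steps, assuming the stock /etc/crio/crio.conf.d/02-crio.conf layout of the kicbase image:

    # pause image and cgroup driver, exactly as in the log lines above
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio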
	I0908 12:49:29.417281  968017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:49:29.417384  968017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:49:29.421280  968017 start.go:563] Will wait 60s for crictl version
	I0908 12:49:29.421346  968017 ssh_runner.go:195] Run: which crictl
	I0908 12:49:29.425224  968017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:49:29.463553  968017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
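With /etc/crictl.yaml pointing at the CRI-O socket (written a few lines above), the runtime can also be queried directly; a short sketch using standard crictl subcommands:

    sudo crictl version          # same runtime name/version as reported above
    sudo crictl info             # runtime status and config as JSON
    sudo crictl ps -a            # all CRI containers, including exited ones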
	I0908 12:49:29.463638  968017 ssh_runner.go:195] Run: crio --version
	I0908 12:49:29.506438  968017 ssh_runner.go:195] Run: crio --version
	I0908 12:49:29.547947  968017 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 12:49:29.549125  968017 cli_runner.go:164] Run: docker network inspect embed-certs-095356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:49:29.567251  968017 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0908 12:49:29.571559  968017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:49:29.583638  968017 kubeadm.go:875] updating cluster {Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:49:29.583786  968017 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:49:29.583863  968017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:49:29.628331  968017 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:49:29.628362  968017 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:49:29.628431  968017 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:49:29.667577  968017 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:49:29.667607  968017 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:49:29.667618  968017 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0908 12:49:29.667774  968017 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-095356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
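Once the unit and drop-in shown above are written out (see the scp lines further down), the effective kubelet configuration can be verified on the node with standard systemd tooling; a sketch:

    sudo systemctl cat kubelet                   # unit plus the 10-kubeadm.conf drop-in
    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet --no-pager -n 50  # recent kubelet log lines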
	I0908 12:49:29.667845  968017 ssh_runner.go:195] Run: crio config
	I0908 12:49:29.714731  968017 cni.go:84] Creating CNI manager for ""
	I0908 12:49:29.714763  968017 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:49:29.714778  968017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:49:29.714806  968017 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-095356 NodeName:embed-certs-095356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:49:29.714964  968017 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-095356"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:49:29.715064  968017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:49:29.724537  968017 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:49:29.724606  968017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:49:29.734183  968017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0908 12:49:29.752695  968017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:49:29.770346  968017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
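At this point the generated config sits at /var/tmp/minikube/kubeadm.yaml.new on the node; it can be sanity-checked in place, a sketch assuming the kubeadm config validate subcommand available in recent kubeadm releases:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new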
	I0908 12:49:29.788189  968017 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:49:29.792659  968017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:49:29.806295  968017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:49:29.885492  968017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:49:29.899924  968017 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356 for IP: 192.168.76.2
	I0908 12:49:29.899947  968017 certs.go:194] generating shared ca certs ...
	I0908 12:49:29.899965  968017 certs.go:226] acquiring lock for ca certs: {Name:mkfa58237bde04d2b800b1fdade18ddbc226533f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:29.900170  968017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key
	I0908 12:49:29.900232  968017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key
	I0908 12:49:29.900244  968017 certs.go:256] generating profile certs ...
	I0908 12:49:29.900397  968017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/client.key
	I0908 12:49:29.900479  968017 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/apiserver.key.351e8f67
	I0908 12:49:29.900529  968017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/proxy-client.key
	I0908 12:49:29.900673  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem (1338 bytes)
	W0908 12:49:29.900723  968017 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620_empty.pem, impossibly tiny 0 bytes
	I0908 12:49:29.900738  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:49:29.900773  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:49:29.900804  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:49:29.900834  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/certs/key.pem (1675 bytes)
	I0908 12:49:29.900885  968017 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem (1708 bytes)
	I0908 12:49:29.901844  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:49:29.929236  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:49:29.955283  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:49:30.001144  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0908 12:49:30.083811  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0908 12:49:30.111611  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 12:49:30.137544  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:49:30.162987  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/embed-certs-095356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 12:49:30.190308  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/ssl/certs/6186202.pem --> /usr/share/ca-certificates/6186202.pem (1708 bytes)
	I0908 12:49:30.216267  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:49:30.241179  968017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-614854/.minikube/certs/618620.pem --> /usr/share/ca-certificates/618620.pem (1338 bytes)
	I0908 12:49:30.266532  968017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:49:30.286676  968017 ssh_runner.go:195] Run: openssl version
	I0908 12:49:30.292793  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6186202.pem && ln -fs /usr/share/ca-certificates/6186202.pem /etc/ssl/certs/6186202.pem"
	I0908 12:49:30.302890  968017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6186202.pem
	I0908 12:49:30.307054  968017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 11:43 /usr/share/ca-certificates/6186202.pem
	I0908 12:49:30.307137  968017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6186202.pem
	I0908 12:49:30.314839  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6186202.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:49:30.324591  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:49:30.334856  968017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:49:30.339200  968017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:49:30.339265  968017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:49:30.346720  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:49:30.356744  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/618620.pem && ln -fs /usr/share/ca-certificates/618620.pem /etc/ssl/certs/618620.pem"
	I0908 12:49:30.366464  968017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/618620.pem
	I0908 12:49:30.370295  968017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 11:43 /usr/share/ca-certificates/618620.pem
	I0908 12:49:30.370359  968017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/618620.pem
	I0908 12:49:30.377461  968017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/618620.pem /etc/ssl/certs/51391683.0"
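The hash-named links above follow OpenSSL's subject-hash convention for trust stores; a sketch of how such a link is derived for any one of the copied CA files (paths from this run, the hash value is whatever openssl prints):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this CA, per the log above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"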
	I0908 12:49:30.387829  968017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:49:30.392212  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:49:30.399604  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:49:30.406794  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:49:30.415234  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:49:30.424814  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:49:30.433404  968017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
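Each -checkend 86400 call above asks whether the certificate is still valid for at least one more day; a consolidated sketch over the same control-plane certificates:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client etcd/server etcd/healthcheck-client etcd/peer; do
        sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
            && echo "ok        $c" || echo "expiring  $c"
    done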
	I0908 12:49:30.441261  968017 kubeadm.go:392] StartCluster: {Name:embed-certs-095356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-095356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:49:30.441390  968017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:49:30.441443  968017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:49:30.495102  968017 cri.go:89] found id: ""
	I0908 12:49:30.495193  968017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:49:30.507375  968017 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0908 12:49:30.507460  968017 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0908 12:49:30.507518  968017 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0908 12:49:30.525592  968017 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0908 12:49:30.526663  968017 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-095356" does not appear in /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:49:30.527123  968017 kubeconfig.go:62] /home/jenkins/minikube-integration/21512-614854/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-095356" cluster setting kubeconfig missing "embed-certs-095356" context setting]
	I0908 12:49:30.527890  968017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
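After this repair the cluster and context entries should be present in the test kubeconfig; they can be confirmed with plain kubectl against the same file, a sketch:

    KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig kubectl config get-contexts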
	I0908 12:49:30.529807  968017 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0908 12:49:30.589773  968017 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0908 12:49:30.589818  968017 kubeadm.go:593] duration metric: took 82.347027ms to restartPrimaryControlPlane
	I0908 12:49:30.589831  968017 kubeadm.go:394] duration metric: took 148.584231ms to StartCluster
	I0908 12:49:30.589855  968017 settings.go:142] acquiring lock: {Name:mk9e6c1dbd6be0735fc1b1285e1ee30836d4ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:30.589960  968017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:49:30.592381  968017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/kubeconfig: {Name:mkdc8e576f612d76ffebaac15a9174cc9dea2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:49:30.592824  968017 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:49:30.593255  968017 config.go:182] Loaded profile config "embed-certs-095356": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:49:30.593363  968017 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0908 12:49:30.593868  968017 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-095356"
	I0908 12:49:30.593970  968017 addons.go:69] Setting metrics-server=true in profile "embed-certs-095356"
	I0908 12:49:30.594004  968017 addons.go:238] Setting addon metrics-server=true in "embed-certs-095356"
	I0908 12:49:30.594027  968017 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-095356"
	W0908 12:49:30.594087  968017 addons.go:247] addon storage-provisioner should already be in state true
	W0908 12:49:30.594034  968017 addons.go:247] addon metrics-server should already be in state true
	I0908 12:49:30.594192  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.594784  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.593949  968017 addons.go:69] Setting dashboard=true in profile "embed-certs-095356"
	I0908 12:49:30.594998  968017 addons.go:238] Setting addon dashboard=true in "embed-certs-095356"
	W0908 12:49:30.595035  968017 addons.go:247] addon dashboard should already be in state true
	I0908 12:49:30.593936  968017 addons.go:69] Setting default-storageclass=true in profile "embed-certs-095356"
	I0908 12:49:30.595128  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.595190  968017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-095356"
	I0908 12:49:30.595779  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.595794  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.596312  968017 out.go:179] * Verifying Kubernetes components...
	I0908 12:49:30.596702  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.597381  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.598769  968017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:49:30.627242  968017 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0908 12:49:30.627336  968017 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:49:30.629053  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 12:49:30.629102  968017 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 12:49:30.629240  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.629609  968017 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:49:30.629646  968017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:49:30.629710  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.630069  968017 addons.go:238] Setting addon default-storageclass=true in "embed-certs-095356"
	W0908 12:49:30.630103  968017 addons.go:247] addon default-storageclass should already be in state true
	I0908 12:49:30.630136  968017 host.go:66] Checking if "embed-certs-095356" exists ...
	I0908 12:49:30.630646  968017 cli_runner.go:164] Run: docker container inspect embed-certs-095356 --format={{.State.Status}}
	I0908 12:49:30.636302  968017 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0908 12:49:30.637830  968017 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0908 12:49:30.639045  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0908 12:49:30.639070  968017 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0908 12:49:30.639141  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.654868  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.656480  968017 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:49:30.656502  968017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:49:30.656564  968017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-095356
	I0908 12:49:30.657798  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.664788  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.677548  968017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/embed-certs-095356/id_rsa Username:docker}
	I0908 12:49:30.977337  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:49:30.993616  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 12:49:30.993641  968017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0908 12:49:31.081647  968017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:49:31.089969  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0908 12:49:31.090006  968017 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0908 12:49:31.178840  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:49:31.184530  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 12:49:31.184567  968017 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 12:49:31.195426  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0908 12:49:31.195464  968017 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0908 12:49:31.294073  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0908 12:49:31.294120  968017 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0908 12:49:31.299350  968017 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:49:31.299386  968017 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 12:49:31.393389  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0908 12:49:31.393423  968017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0908 12:49:31.397630  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:49:31.491283  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0908 12:49:31.491319  968017 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0908 12:49:31.514037  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0908 12:49:31.514072  968017 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0908 12:49:31.598921  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0908 12:49:31.599030  968017 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0908 12:49:31.676276  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0908 12:49:31.676308  968017 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0908 12:49:31.702967  968017 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 12:49:31.702996  968017 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0908 12:49:31.722900  968017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0908 12:49:34.500652  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.523264613s)
	I0908 12:49:34.500807  968017 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.419110464s)
	I0908 12:49:34.500856  968017 node_ready.go:35] waiting up to 6m0s for node "embed-certs-095356" to be "Ready" ...
	I0908 12:49:34.594680  968017 node_ready.go:49] node "embed-certs-095356" is "Ready"
	I0908 12:49:34.594723  968017 node_ready.go:38] duration metric: took 93.848547ms for node "embed-certs-095356" to be "Ready" ...
	I0908 12:49:34.594743  968017 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:49:34.594802  968017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:49:36.709062  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.530174828s)
	I0908 12:49:36.709183  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.311509171s)
	I0908 12:49:36.709220  968017 addons.go:479] Verifying addon metrics-server=true in "embed-certs-095356"
	I0908 12:49:36.709338  968017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.98638923s)
	I0908 12:49:36.709389  968017 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.114563032s)
	I0908 12:49:36.709423  968017 api_server.go:72] duration metric: took 6.116557346s to wait for apiserver process to appear ...
	I0908 12:49:36.709467  968017 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:49:36.709490  968017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:49:36.711538  968017 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-095356 addons enable metrics-server
	
	I0908 12:49:36.713424  968017 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0908 12:49:36.714376  968017 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:49:36.714401  968017 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:49:36.714765  968017 addons.go:514] duration metric: took 6.121413185s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0908 12:49:37.209650  968017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:49:37.215303  968017 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0908 12:49:37.215336  968017 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0908 12:49:37.709615  968017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0908 12:49:37.715220  968017 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0908 12:49:37.716454  968017 api_server.go:141] control plane version: v1.34.0
	I0908 12:49:37.716483  968017 api_server.go:131] duration metric: took 1.007008535s to wait for apiserver health ...
	I0908 12:49:37.716492  968017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:49:37.720243  968017 system_pods.go:59] 9 kube-system pods found
	I0908 12:49:37.720291  968017 system_pods.go:61] "coredns-66bc5c9577-vmqhr" [979cc534-4ad1-434a-9838-7712ba9213b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:49:37.720308  968017 system_pods.go:61] "etcd-embed-certs-095356" [302f42a3-4f92-4028-9ef1-60a84815daef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:49:37.720315  968017 system_pods.go:61] "kindnet-8grdw" [8f60ebac-5974-4e25-a33d-1c82b8618cbc] Running
	I0908 12:49:37.720321  968017 system_pods.go:61] "kube-apiserver-embed-certs-095356" [0b2fc077-f650-4ade-8239-810edf4b9d9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:49:37.720327  968017 system_pods.go:61] "kube-controller-manager-embed-certs-095356" [b08ed252-184b-4d37-a752-e16594aa4a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:49:37.720334  968017 system_pods.go:61] "kube-proxy-rk7d4" [7e92a5d1-aac6-48cb-9460-45706cda644d] Running
	I0908 12:49:37.720340  968017 system_pods.go:61] "kube-scheduler-embed-certs-095356" [0df658e9-5081-432d-a4a7-df0a452660ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:49:37.720348  968017 system_pods.go:61] "metrics-server-746fcd58dc-kw49k" [ccedbe2e-6b00-427c-aee5-d32661187320] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:49:37.720362  968017 system_pods.go:61] "storage-provisioner" [ef3ef697-29b5-4d95-9fce-0d4e14bf1575] Running
	I0908 12:49:37.720370  968017 system_pods.go:74] duration metric: took 3.871512ms to wait for pod list to return data ...
	I0908 12:49:37.720381  968017 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:49:37.723566  968017 default_sa.go:45] found service account: "default"
	I0908 12:49:37.723599  968017 default_sa.go:55] duration metric: took 3.211119ms for default service account to be created ...
	I0908 12:49:37.723612  968017 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:49:37.726952  968017 system_pods.go:86] 9 kube-system pods found
	I0908 12:49:37.726991  968017 system_pods.go:89] "coredns-66bc5c9577-vmqhr" [979cc534-4ad1-434a-9838-7712ba9213b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:49:37.727005  968017 system_pods.go:89] "etcd-embed-certs-095356" [302f42a3-4f92-4028-9ef1-60a84815daef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0908 12:49:37.727018  968017 system_pods.go:89] "kindnet-8grdw" [8f60ebac-5974-4e25-a33d-1c82b8618cbc] Running
	I0908 12:49:37.727028  968017 system_pods.go:89] "kube-apiserver-embed-certs-095356" [0b2fc077-f650-4ade-8239-810edf4b9d9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0908 12:49:37.727037  968017 system_pods.go:89] "kube-controller-manager-embed-certs-095356" [b08ed252-184b-4d37-a752-e16594aa4a38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0908 12:49:37.727047  968017 system_pods.go:89] "kube-proxy-rk7d4" [7e92a5d1-aac6-48cb-9460-45706cda644d] Running
	I0908 12:49:37.727056  968017 system_pods.go:89] "kube-scheduler-embed-certs-095356" [0df658e9-5081-432d-a4a7-df0a452660ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0908 12:49:37.727068  968017 system_pods.go:89] "metrics-server-746fcd58dc-kw49k" [ccedbe2e-6b00-427c-aee5-d32661187320] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:49:37.727077  968017 system_pods.go:89] "storage-provisioner" [ef3ef697-29b5-4d95-9fce-0d4e14bf1575] Running
	I0908 12:49:37.727090  968017 system_pods.go:126] duration metric: took 3.469285ms to wait for k8s-apps to be running ...
	I0908 12:49:37.727103  968017 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 12:49:37.727180  968017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:49:37.739211  968017 system_svc.go:56] duration metric: took 12.098934ms WaitForService to wait for kubelet
	I0908 12:49:37.739249  968017 kubeadm.go:578] duration metric: took 7.146380991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:49:37.739275  968017 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:49:37.742737  968017 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0908 12:49:37.742768  968017 node_conditions.go:123] node cpu capacity is 8
	I0908 12:49:37.742783  968017 node_conditions.go:105] duration metric: took 3.502532ms to run NodePressure ...
	I0908 12:49:37.742797  968017 start.go:241] waiting for startup goroutines ...
	I0908 12:49:37.742806  968017 start.go:246] waiting for cluster config update ...
	I0908 12:49:37.742820  968017 start.go:255] writing updated cluster config ...
	I0908 12:49:37.743123  968017 ssh_runner.go:195] Run: rm -f paused
	I0908 12:49:37.747094  968017 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:49:37.751159  968017 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vmqhr" in "kube-system" namespace to be "Ready" or be gone ...
	W0908 12:49:39.757202  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:41.781793  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:44.257272  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:46.757216  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:49.257899  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:51.757748  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:54.257571  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:56.257624  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:49:58.757397  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:01.256722  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:03.257559  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:05.757014  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:08.257679  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:10.757606  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	W0908 12:50:13.257013  968017 pod_ready.go:104] pod "coredns-66bc5c9577-vmqhr" is not "Ready", error: <nil>
	I0908 12:50:13.758380  968017 pod_ready.go:94] pod "coredns-66bc5c9577-vmqhr" is "Ready"
	I0908 12:50:13.758411  968017 pod_ready.go:86] duration metric: took 36.007221342s for pod "coredns-66bc5c9577-vmqhr" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.761685  968017 pod_ready.go:83] waiting for pod "etcd-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.767420  968017 pod_ready.go:94] pod "etcd-embed-certs-095356" is "Ready"
	I0908 12:50:13.767459  968017 pod_ready.go:86] duration metric: took 5.743199ms for pod "etcd-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.862817  968017 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.869826  968017 pod_ready.go:94] pod "kube-apiserver-embed-certs-095356" is "Ready"
	I0908 12:50:13.869857  968017 pod_ready.go:86] duration metric: took 7.008074ms for pod "kube-apiserver-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.872326  968017 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:13.954638  968017 pod_ready.go:94] pod "kube-controller-manager-embed-certs-095356" is "Ready"
	I0908 12:50:13.954671  968017 pod_ready.go:86] duration metric: took 82.317504ms for pod "kube-controller-manager-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:14.154975  968017 pod_ready.go:83] waiting for pod "kube-proxy-rk7d4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:14.555035  968017 pod_ready.go:94] pod "kube-proxy-rk7d4" is "Ready"
	I0908 12:50:14.555083  968017 pod_ready.go:86] duration metric: took 400.07973ms for pod "kube-proxy-rk7d4" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:14.755134  968017 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:15.155832  968017 pod_ready.go:94] pod "kube-scheduler-embed-certs-095356" is "Ready"
	I0908 12:50:15.155864  968017 pod_ready.go:86] duration metric: took 400.702953ms for pod "kube-scheduler-embed-certs-095356" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:50:15.155885  968017 pod_ready.go:40] duration metric: took 37.408745743s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:50:15.202619  968017 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:50:15.204368  968017 out.go:179] * Done! kubectl is now configured to use "embed-certs-095356" cluster and "default" namespace by default
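The startup log above shows minikube repeatedly probing the apiserver's /healthz endpoint until the rbac/bootstrap-roles and apiservice-discovery-controller post-start hooks report ok. A minimal way to run the same verbose health probe by hand, assuming the embed-certs-095356 profile were still running and present in the kubeconfig, would be:

	# poll the aggregated health endpoint with per-check detail, as the log above does
	kubectl --context embed-certs-095356 get --raw='/healthz?verbose'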
	
	
	==> CRI-O <==
	Sep 08 12:54:16 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:16.507085115Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4214ea20-891c-4be5-aa70-fd9712ef98a7 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:27 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:27.506400585Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0c31bc79-30b1-4a95-b099-faead7cc576b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:27 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:27.506658579Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0c31bc79-30b1-4a95-b099-faead7cc576b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:31 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:31.506253787Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b69b78cf-340e-4ad4-aff4-3791e1e8cce7 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:31 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:31.506521934Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b69b78cf-340e-4ad4-aff4-3791e1e8cce7 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:40 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:40.506157992Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ccc347d2-af1a-4922-88bd-47e7d0c1a224 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:40 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:40.506470126Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ccc347d2-af1a-4922-88bd-47e7d0c1a224 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:43 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:43.506193066Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b90a07f1-e01f-4e60-a784-a9a23894367a name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:43 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:43.506509249Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b90a07f1-e01f-4e60-a784-a9a23894367a name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:51 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:51.506331817Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=6423b7e2-caae-47a8-8f8e-d6bd4eda475c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:51 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:51.506643375Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=6423b7e2-caae-47a8-8f8e-d6bd4eda475c name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:58 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:58.506314533Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7149c1b6-795a-4778-9001-acd13503183f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:54:58 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:54:58.506670808Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=7149c1b6-795a-4778-9001-acd13503183f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:02 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:02.506497120Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=33374709-efe4-488e-968e-b8f2c1bbffea name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:02 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:02.506794804Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=33374709-efe4-488e-968e-b8f2c1bbffea name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:12 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:12.505605116Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8f694620-e9a2-49a2-ae34-ef0cd4690cda name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:12 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:12.505888819Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8f694620-e9a2-49a2-ae34-ef0cd4690cda name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:14 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:14.506333564Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=109b9c21-ac3a-4773-ad08-9fcc09accdda name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:14 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:14.506604330Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=109b9c21-ac3a-4773-ad08-9fcc09accdda name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:24 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:24.506446581Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=aaac9470-a9ef-4066-9f42-d9b649314660 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:24 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:24.506803483Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=aaac9470-a9ef-4066-9f42-d9b649314660 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:28 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:28.507211542Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=130d8324-3d6f-4551-8948-3b515f0cf1f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:28 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:28.507523055Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=130d8324-3d6f-4551-8948-3b515f0cf1f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:38 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:38.506156151Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=fea53284-822d-4fa8-8dec-acabe774c925 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:55:38 default-k8s-diff-port-039958 crio[679]: time="2025-09-08 12:55:38.506501681Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=fea53284-822d-4fa8-8dec-acabe774c925 name=/runtime.v1.ImageService/ImageStatus
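The repeated "Image fake.domain/registry.k8s.io/echoserver:1.4 not found" entries above are expected: the profile configuration earlier in this log sets CustomAddonRegistries:map[MetricsServer:fake.domain], so the metrics-server image points at a registry that does not exist and the pull can never succeed. A rough way to confirm this from the node, assuming the default-k8s-diff-port-039958 profile were still up, would be:

	# list images known to CRI-O on the node; the echoserver image should be absent
	minikube -p default-k8s-diff-port-039958 ssh -- sudo crictl images | grep -i echoserver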
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cb66794fa7ed2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   3a81e35b5123f       dashboard-metrics-scraper-6ffb444bf9-d9vtd
	7ccf934f01964       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   1ae25c12f7796       storage-provisioner
	95f26f1fc268d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   11d2bb15190cf       busybox
	4f1209901005e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   934e38b5787b0       kindnet-89lwp
	635968dd094ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   1ae25c12f7796       storage-provisioner
	935c541edad30       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago      Running             kube-proxy                  1                   22569d0f2c042       kube-proxy-cgrs8
	2e649966366d7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   3fa4dbe083d4f       coredns-66bc5c9577-gb4rh
	cfa239a4247ff       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago      Running             kube-controller-manager     1                   e057f4f3944db       kube-controller-manager-default-k8s-diff-port-039958
	9ba8aa1de66dd       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago      Running             kube-scheduler              1                   accbbe8633db4       kube-scheduler-default-k8s-diff-port-039958
	95537c4837bf6       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago      Running             kube-apiserver              1                   955317795ded0       kube-apiserver-default-k8s-diff-port-039958
	cc7207e2cb8e1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   11a0c9dbb498e       etcd-default-k8s-diff-port-039958
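In the container status table above, dashboard-metrics-scraper is in state Exited with attempt 8, i.e. it has been restarting repeatedly, while the other control-plane containers have been running for about 18 minutes. A quick way to inspect those pods, assuming the profile were still available, would be:

	# show dashboard pods together with restart counts and node placement
	kubectl --context default-k8s-diff-port-039958 -n kubernetes-dashboard get pods -o wide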
	
	
	==> coredns [2e649966366d752380cbb3e0cb8ec21cbe00581553b49ad9f2b8bc8219424879] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37144 - 748 "HINFO IN 980441565112902190.8224468124574063050. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022189332s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
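The CoreDNS log above shows the server starting with an unsynced Kubernetes API and its list calls to the Service IP 10.96.0.1:443 timing out, which is consistent with the kube-apiserver still coming up during the restart window. To re-check from a live cluster, one could tail the CoreDNS logs, for example:

	# tail CoreDNS logs for the kube-dns pods in kube-system
	kubectl --context default-k8s-diff-port-039958 -n kube-system logs -l k8s-app=kube-dns --tail=20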
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-039958
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-039958
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=default-k8s-diff-port-039958
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_35_35_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-039958
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:55:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:53:07 +0000   Mon, 08 Sep 2025 12:35:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:53:07 +0000   Mon, 08 Sep 2025 12:35:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:53:07 +0000   Mon, 08 Sep 2025 12:35:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:53:07 +0000   Mon, 08 Sep 2025 12:36:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-039958
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859360Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb4822e57e9a4bda9fd6d32cc8567a71
	  System UUID:                c1a4b17b-d533-4931-9b70-905556f15444
	  Boot ID:                    1bb31c1a-3b78-4ea1-9977-d0689f279875
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-gb4rh                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-039958                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         20m
	  kube-system                 kindnet-89lwp                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-039958             250m (3%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-039958    200m (2%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-cgrs8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-039958             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-746fcd58dc-hvqdm                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-d9vtd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4bds7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           20m                node-controller  Node default-k8s-diff-port-039958 event: Registered Node default-k8s-diff-port-039958 in Controller
	  Normal   NodeReady                19m                kubelet          Node default-k8s-diff-port-039958 status is now: NodeReady
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-039958 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-039958 event: Registered Node default-k8s-diff-port-039958 in Controller
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +1.006042] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000007] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +2.015807] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000007] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000003] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +4.251670] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000007] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +8.195202] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000006] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000002] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-aec4dfba5a81
	[  +0.000001] ll header: 00000000: e2 66 ae f3 69 01 3a b2 9c 4b bf 51 08 00
	
	
	==> etcd [cc7207e2cb8e143c45822375891cfe394fc5a0816d16278c556c510b63826bbc] <==
	{"level":"warn","ts":"2025-09-08T12:36:57.690913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.699050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.723968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.776532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.784784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.793284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.803005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.813268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.876089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.884377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.893461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.910948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.918606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.980614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.989023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:57.997816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:36:58.105006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58392","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T12:46:57.099129Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":996}
	{"level":"warn","ts":"2025-09-08T12:46:57.100113Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.871308ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873788529878759400 username:\"kube-apiserver-etcd-client\" auth_revision:1 > compaction:<revision:996 > ","response":"size:5"}
	{"level":"info","ts":"2025-09-08T12:46:57.100253Z","caller":"traceutil/trace.go:172","msg":"trace[1611255491] compact","detail":"{revision:996; response_revision:1285; }","duration":"201.910065ms","start":"2025-09-08T12:46:56.898325Z","end":"2025-09-08T12:46:57.100235Z","steps":["trace[1611255491] 'process raft request'  (duration: 69.880893ms)","trace[1611255491] 'check and update compact revision'  (duration: 130.707291ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T12:46:57.169791Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":996,"took":"70.287987ms","hash":3912092366,"current-db-size-bytes":3223552,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":3223552,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-09-08T12:46:57.169856Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3912092366,"revision":996,"compact-revision":-1}
	{"level":"info","ts":"2025-09-08T12:51:57.105612Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1285}
	{"level":"info","ts":"2025-09-08T12:51:57.108637Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1285,"took":"2.692935ms","hash":219553704,"current-db-size-bytes":3223552,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1945600,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-08T12:51:57.108691Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":219553704,"revision":1285,"compact-revision":996}
	
	
	==> kernel <==
	 12:55:39 up  3:38,  0 users,  load average: 0.92, 1.10, 1.43
	Linux default-k8s-diff-port-039958 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4f1209901005ec30cd4049c8735ca3659731f92942cb51582531c6ce3676c955] <==
	I0908 12:53:31.290317       1 main.go:301] handling current node
	I0908 12:53:41.287802       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:53:41.287871       1 main.go:301] handling current node
	I0908 12:53:51.291751       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:53:51.291790       1 main.go:301] handling current node
	I0908 12:54:01.288079       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:54:01.288125       1 main.go:301] handling current node
	I0908 12:54:11.291773       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:54:11.291821       1 main.go:301] handling current node
	I0908 12:54:21.292661       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:54:21.292698       1 main.go:301] handling current node
	I0908 12:54:31.288662       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:54:31.288711       1 main.go:301] handling current node
	I0908 12:54:41.293342       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:54:41.293389       1 main.go:301] handling current node
	I0908 12:54:51.291767       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:54:51.291804       1 main.go:301] handling current node
	I0908 12:55:01.287781       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:55:01.287928       1 main.go:301] handling current node
	I0908 12:55:11.292148       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:55:11.292187       1 main.go:301] handling current node
	I0908 12:55:21.292058       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:55:21.292108       1 main.go:301] handling current node
	I0908 12:55:31.287922       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0908 12:55:31.287959       1 main.go:301] handling current node
	
	
	==> kube-apiserver [95537c4837bf61514d4f2e6f9aa4ef0a11b66d9c147b7817c16eeb8016929989] <==
	I0908 12:51:59.922789       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:52:21.092076       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:52:42.289399       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:52:59.922123       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:52:59.922187       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:52:59.922205       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:52:59.923307       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:52:59.923402       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:52:59.923416       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:53:34.469332       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:54:06.240854       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0908 12:54:59.923241       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:54:59.923308       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0908 12:54:59.923324       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0908 12:54:59.923793       1 handler_proxy.go:99] no RequestInfo found in the context
	E0908 12:54:59.923978       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0908 12:54:59.925131       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0908 12:55:02.312451       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:55:27.752170       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [cfa239a4247ff63369ba72728e0c8dcded1b41e1da1837fa9b00ec1565c72fa8] <==
	I0908 12:49:34.517101       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:50:04.403591       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:50:04.525575       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:50:34.408472       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:50:34.533126       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:51:04.412928       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:51:04.540430       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:51:34.417220       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:51:34.549301       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:52:04.422576       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:52:04.557449       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:52:34.427187       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:52:34.566015       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:53:04.431915       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:53:04.574153       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:53:34.436500       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:53:34.583128       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:54:04.441157       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:54:04.590945       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:54:34.445574       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:54:34.597684       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:55:04.450991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:55:04.605038       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0908 12:55:34.455262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 12:55:34.612186       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [935c541edad304c491ccb32b3c28fdada3be3f6a8ec4b9dea337c1ce6a25e312] <==
	I0908 12:37:01.004027       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:37:01.229631       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:37:01.330514       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:37:01.330561       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0908 12:37:01.330669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:37:01.351166       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:37:01.351234       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:37:01.355489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:37:01.355897       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:37:01.355914       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:37:01.356973       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:37:01.357086       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:37:01.356987       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:37:01.357167       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:37:01.356995       1 config.go:200] "Starting service config controller"
	I0908 12:37:01.357193       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:37:01.357051       1 config.go:309] "Starting node config controller"
	I0908 12:37:01.357219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:37:01.357225       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:37:01.457824       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 12:37:01.457830       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 12:37:01.457887       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9ba8aa1de66dd67173b1fe0009e5705648249811349c1d6abfeb23f588943eaf] <==
	I0908 12:36:56.616494       1 serving.go:386] Generated self-signed cert in-memory
	W0908 12:36:58.826652       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 12:36:58.826686       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 12:36:58.826696       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 12:36:58.826703       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 12:36:58.990632       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 12:36:58.990762       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:36:58.995308       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:58.995372       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 12:36:58.996067       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 12:36:58.996168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 12:36:59.096535       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:54:54 default-k8s-diff-port-039958 kubelet[821]: E0908 12:54:54.748020     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336094747738781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:54:54 default-k8s-diff-port-039958 kubelet[821]: E0908 12:54:54.748069     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336094747738781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:54:58 default-k8s-diff-port-039958 kubelet[821]: E0908 12:54:58.507059     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7" podUID="81cc6553-f21a-4023-ba22-ee82ccc64adb"
	Sep 08 12:55:01 default-k8s-diff-port-039958 kubelet[821]: I0908 12:55:01.505951     821 scope.go:117] "RemoveContainer" containerID="cb66794fa7ed2fd260ef29a1b9b5704845ecff402bb0e989cbef2231947f9523"
	Sep 08 12:55:01 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:01.506208     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d9vtd_kubernetes-dashboard(c29d2f54-4eb1-4591-aca1-6507b6c73788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d9vtd" podUID="c29d2f54-4eb1-4591-aca1-6507b6c73788"
	Sep 08 12:55:02 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:02.507161     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-hvqdm" podUID="d648640c-2cab-4575-8290-51c39f0a19b3"
	Sep 08 12:55:04 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:04.749225     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336104748941276  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:55:04 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:04.749265     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336104748941276  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:55:12 default-k8s-diff-port-039958 kubelet[821]: I0908 12:55:12.505234     821 scope.go:117] "RemoveContainer" containerID="cb66794fa7ed2fd260ef29a1b9b5704845ecff402bb0e989cbef2231947f9523"
	Sep 08 12:55:12 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:12.505388     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d9vtd_kubernetes-dashboard(c29d2f54-4eb1-4591-aca1-6507b6c73788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d9vtd" podUID="c29d2f54-4eb1-4591-aca1-6507b6c73788"
	Sep 08 12:55:12 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:12.506162     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7" podUID="81cc6553-f21a-4023-ba22-ee82ccc64adb"
	Sep 08 12:55:14 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:14.506892     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-hvqdm" podUID="d648640c-2cab-4575-8290-51c39f0a19b3"
	Sep 08 12:55:14 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:14.750506     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336114750250491  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:55:14 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:14.750548     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336114750250491  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:55:24 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:24.507150     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7" podUID="81cc6553-f21a-4023-ba22-ee82ccc64adb"
	Sep 08 12:55:24 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:24.753140     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336124752170490  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:55:24 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:24.753199     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336124752170490  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:55:25 default-k8s-diff-port-039958 kubelet[821]: I0908 12:55:25.505198     821 scope.go:117] "RemoveContainer" containerID="cb66794fa7ed2fd260ef29a1b9b5704845ecff402bb0e989cbef2231947f9523"
	Sep 08 12:55:25 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:25.505417     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d9vtd_kubernetes-dashboard(c29d2f54-4eb1-4591-aca1-6507b6c73788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d9vtd" podUID="c29d2f54-4eb1-4591-aca1-6507b6c73788"
	Sep 08 12:55:28 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:28.507850     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-hvqdm" podUID="d648640c-2cab-4575-8290-51c39f0a19b3"
	Sep 08 12:55:34 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:34.754371     821 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757336134754158761  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:55:34 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:34.754417     821 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757336134754158761  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 08 12:55:36 default-k8s-diff-port-039958 kubelet[821]: I0908 12:55:36.505464     821 scope.go:117] "RemoveContainer" containerID="cb66794fa7ed2fd260ef29a1b9b5704845ecff402bb0e989cbef2231947f9523"
	Sep 08 12:55:36 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:36.505728     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d9vtd_kubernetes-dashboard(c29d2f54-4eb1-4591-aca1-6507b6c73788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d9vtd" podUID="c29d2f54-4eb1-4591-aca1-6507b6c73788"
	Sep 08 12:55:38 default-k8s-diff-port-039958 kubelet[821]: E0908 12:55:38.506953     821 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4bds7" podUID="81cc6553-f21a-4023-ba22-ee82ccc64adb"
	
	
	==> storage-provisioner [635968dd094acd324b55a3848301449f01f2e1335bc2775c4064ec3ff9ef0a65] <==
	I0908 12:37:00.878178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0908 12:37:30.882001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7ccf934f01964d65bd372427ec74cbe04850be842ff2c31f93e83c05f7335fa9] <==
	W0908 12:55:13.886996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:15.890685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:15.895410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:17.899276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:17.904211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:19.907998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:19.914733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:21.918169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:21.922385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:23.925978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:23.932278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:25.935643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:25.940960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:27.944810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:27.950800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:29.954546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:29.959010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:31.962427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:31.967080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:33.970867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:33.976460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:35.979969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:35.985180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:37.989236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:55:37.995148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
E0908 12:55:39.621270  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:55:39.627752  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:55:39.639208  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:55:39.661020  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:55:39.702476  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E0908 12:55:39.783873  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-hvqdm kubernetes-dashboard-855c9754f9-4bds7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 describe pod metrics-server-746fcd58dc-hvqdm kubernetes-dashboard-855c9754f9-4bds7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-039958 describe pod metrics-server-746fcd58dc-hvqdm kubernetes-dashboard-855c9754f9-4bds7: exit status 1 (62.789994ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-hvqdm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4bds7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-039958 describe pod metrics-server-746fcd58dc-hvqdm kubernetes-dashboard-855c9754f9-4bds7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.57s)

                                                
                                    

Test pass (278/325)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.56
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 4.68
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.24
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.19
21 TestBinaryMirror 0.84
22 TestOffline 91.08
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 161.39
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.48
35 TestAddons/parallel/Registry 15.59
36 TestAddons/parallel/RegistryCreds 0.64
38 TestAddons/parallel/InspektorGadget 5.3
39 TestAddons/parallel/MetricsServer 5.82
42 TestAddons/parallel/Headlamp 21.66
43 TestAddons/parallel/CloudSpanner 5.5
44 TestAddons/parallel/LocalPath 12.16
45 TestAddons/parallel/NvidiaDevicePlugin 5.96
46 TestAddons/parallel/Yakd 11.77
47 TestAddons/parallel/AmdGpuDevicePlugin 6.54
48 TestAddons/StoppedEnableDisable 12.21
49 TestCertOptions 28.02
50 TestCertExpiration 227.15
52 TestForceSystemdFlag 31.79
53 TestForceSystemdEnv 44.67
55 TestKVMDriverInstallOrUpdate 1.27
59 TestErrorSpam/setup 23.96
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.93
62 TestErrorSpam/pause 1.66
63 TestErrorSpam/unpause 1.69
64 TestErrorSpam/stop 1.41
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 70.25
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 28.65
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.02
76 TestFunctional/serial/CacheCmd/cache/add_local 1.03
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 39.21
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.48
87 TestFunctional/serial/LogsFileCmd 1.53
88 TestFunctional/serial/InvalidService 3.99
90 TestFunctional/parallel/ConfigCmd 0.43
92 TestFunctional/parallel/DryRun 0.37
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.91
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.52
103 TestFunctional/parallel/CpCmd 1.82
105 TestFunctional/parallel/FileSync 0.29
106 TestFunctional/parallel/CertSync 1.74
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
114 TestFunctional/parallel/License 0.28
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.57
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.73
123 TestFunctional/parallel/ImageCommands/Setup 0.52
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.11
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
145 TestFunctional/parallel/ProfileCmd/profile_list 0.38
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
147 TestFunctional/parallel/MountCmd/any-port 49.81
148 TestFunctional/parallel/MountCmd/specific-port 1.73
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 177.74
163 TestMultiControlPlane/serial/DeployApp 5.91
164 TestMultiControlPlane/serial/PingHostFromPods 1.13
165 TestMultiControlPlane/serial/AddWorkerNode 57.73
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
168 TestMultiControlPlane/serial/CopyFile 16.66
169 TestMultiControlPlane/serial/StopSecondaryNode 12.6
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 33.33
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.19
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 130.58
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.57
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
176 TestMultiControlPlane/serial/StopCluster 35.76
177 TestMultiControlPlane/serial/RestartCluster 54.45
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 42.37
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
184 TestJSONOutput/start/Command 74.24
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.72
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.62
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.81
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.23
209 TestKicCustomNetwork/create_custom_network 29.5
210 TestKicCustomNetwork/use_default_bridge_network 27.27
211 TestKicExistingNetwork 23.55
212 TestKicCustomSubnet 29.73
213 TestKicStaticIP 24.95
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 54.67
218 TestMountStart/serial/StartWithMountFirst 8.24
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 5.59
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.65
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.19
225 TestMountStart/serial/RestartStopped 7.4
226 TestMountStart/serial/VerifyMountPostStop 0.25
229 TestMultiNode/serial/FreshStart2Nodes 96.55
230 TestMultiNode/serial/DeployApp2Nodes 4.87
231 TestMultiNode/serial/PingHostFrom2Pods 0.79
232 TestMultiNode/serial/AddNode 54.58
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.64
235 TestMultiNode/serial/CopyFile 9.41
236 TestMultiNode/serial/StopNode 2.15
237 TestMultiNode/serial/StartAfterStop 7.42
238 TestMultiNode/serial/RestartKeepsNodes 80.26
239 TestMultiNode/serial/DeleteNode 5.33
240 TestMultiNode/serial/StopMultiNode 23.84
241 TestMultiNode/serial/RestartMultiNode 56
242 TestMultiNode/serial/ValidateNameConflict 24.77
247 TestPreload 115.87
252 TestInsufficientStorage 12.93
253 TestRunningBinaryUpgrade 47.06
255 TestKubernetesUpgrade 343.16
256 TestMissingContainerUpgrade 72.38
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestStoppedBinaryUpgrade/Setup 0.55
260 TestNoKubernetes/serial/StartWithK8s 41.26
261 TestStoppedBinaryUpgrade/Upgrade 63.12
262 TestNoKubernetes/serial/StartWithStopK8s 27.97
270 TestNetworkPlugins/group/false 3.37
274 TestStoppedBinaryUpgrade/MinikubeLogs 1
275 TestNoKubernetes/serial/Start 10.31
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
277 TestNoKubernetes/serial/ProfileList 2.05
278 TestNoKubernetes/serial/Stop 1.27
279 TestNoKubernetes/serial/StartNoArgs 6.66
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
289 TestPause/serial/Start 76.68
290 TestPause/serial/SecondStartNoReconfiguration 29.17
291 TestNetworkPlugins/group/auto/Start 75.23
292 TestPause/serial/Pause 0.74
293 TestPause/serial/VerifyStatus 0.31
294 TestPause/serial/Unpause 0.66
295 TestPause/serial/PauseAgain 0.9
296 TestPause/serial/DeletePaused 2.97
297 TestPause/serial/VerifyDeletedResources 0.79
298 TestNetworkPlugins/group/kindnet/Start 72.92
299 TestNetworkPlugins/group/auto/KubeletFlags 0.28
300 TestNetworkPlugins/group/auto/NetCatPod 10.21
301 TestNetworkPlugins/group/auto/DNS 0.14
302 TestNetworkPlugins/group/auto/Localhost 0.12
303 TestNetworkPlugins/group/auto/HairPin 0.12
305 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
306 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
307 TestNetworkPlugins/group/kindnet/NetCatPod 9.2
308 TestNetworkPlugins/group/kindnet/DNS 0.14
309 TestNetworkPlugins/group/kindnet/Localhost 0.12
310 TestNetworkPlugins/group/kindnet/HairPin 0.12
311 TestNetworkPlugins/group/custom-flannel/Start 50.42
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
314 TestNetworkPlugins/group/custom-flannel/DNS 0.15
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
317 TestNetworkPlugins/group/enable-default-cni/Start 70.11
318 TestNetworkPlugins/group/flannel/Start 56.5
319 TestNetworkPlugins/group/bridge/Start 70.77
320 TestNetworkPlugins/group/flannel/ControllerPod 6.01
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
324 TestNetworkPlugins/group/flannel/NetCatPod 9.22
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
328 TestNetworkPlugins/group/flannel/DNS 0.13
329 TestNetworkPlugins/group/flannel/Localhost 0.11
330 TestNetworkPlugins/group/flannel/HairPin 0.11
331 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
332 TestNetworkPlugins/group/bridge/NetCatPod 9.23
334 TestStartStop/group/old-k8s-version/serial/FirstStart 53.03
336 TestStartStop/group/no-preload/serial/FirstStart 57.31
337 TestNetworkPlugins/group/bridge/DNS 0.19
338 TestNetworkPlugins/group/bridge/Localhost 0.2
339 TestNetworkPlugins/group/bridge/HairPin 0.22
341 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.46
342 TestStartStop/group/old-k8s-version/serial/DeployApp 9.29
343 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
344 TestStartStop/group/old-k8s-version/serial/Stop 12.1
345 TestStartStop/group/no-preload/serial/DeployApp 9.41
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
347 TestStartStop/group/no-preload/serial/Stop 11.99
348 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
349 TestStartStop/group/old-k8s-version/serial/SecondStart 46.89
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
351 TestStartStop/group/no-preload/serial/SecondStart 52.47
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.99
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
357 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.03
364 TestStartStop/group/newest-cni/serial/FirstStart 33.03
365 TestStartStop/group/newest-cni/serial/DeployApp 0
366 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.9
367 TestStartStop/group/newest-cni/serial/Stop 1.21
368 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
369 TestStartStop/group/newest-cni/serial/SecondStart 15.21
370 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
371 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
373 TestStartStop/group/newest-cni/serial/Pause 3.03
375 TestStartStop/group/embed-certs/serial/FirstStart 72.76
376 TestStartStop/group/embed-certs/serial/DeployApp 9.25
377 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
378 TestStartStop/group/embed-certs/serial/Stop 11.92
379 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
380 TestStartStop/group/embed-certs/serial/SecondStart 52.33
381 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 476.01
382 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
383 TestStartStop/group/old-k8s-version/serial/Pause 2.86
384 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
385 TestStartStop/group/no-preload/serial/Pause 2.8
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.82
388 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
389 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
390 TestStartStop/group/embed-certs/serial/Pause 2.72
TestDownloadOnly/v1.28.0/json-events (5.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-829747 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-829747 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.558577035s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.56s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 11:33:06.308956  618620 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 11:33:06.309052  618620 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-829747
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-829747: exit status 85 (70.383689ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-829747 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-829747 │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:33:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:33:00.799111  618632 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:33:00.799378  618632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:33:00.799394  618632 out.go:374] Setting ErrFile to fd 2...
	I0908 11:33:00.799399  618632 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:33:00.799626  618632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	W0908 11:33:00.799798  618632 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21512-614854/.minikube/config/config.json: open /home/jenkins/minikube-integration/21512-614854/.minikube/config/config.json: no such file or directory
	I0908 11:33:00.800439  618632 out.go:368] Setting JSON to true
	I0908 11:33:00.801470  618632 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8125,"bootTime":1757323056,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:33:00.801599  618632 start.go:140] virtualization: kvm guest
	I0908 11:33:00.803920  618632 out.go:99] [download-only-829747] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0908 11:33:00.804119  618632 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 11:33:00.804211  618632 notify.go:220] Checking for updates...
	I0908 11:33:00.805564  618632 out.go:171] MINIKUBE_LOCATION=21512
	I0908 11:33:00.807045  618632 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:33:00.808401  618632 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:33:00.809874  618632 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:33:00.811551  618632 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 11:33:00.813921  618632 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 11:33:00.814257  618632 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:33:00.842231  618632 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:33:00.842349  618632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:33:00.893253  618632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 11:33:00.883703382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:33:00.893372  618632 docker.go:318] overlay module found
	I0908 11:33:00.895227  618632 out.go:99] Using the docker driver based on user configuration
	I0908 11:33:00.895268  618632 start.go:304] selected driver: docker
	I0908 11:33:00.895276  618632 start.go:918] validating driver "docker" against <nil>
	I0908 11:33:00.895379  618632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:33:00.945993  618632 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 11:33:00.936844779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:33:00.946240  618632 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:33:00.946890  618632 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 11:33:00.947075  618632 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 11:33:00.949199  618632 out.go:171] Using Docker driver with root privileges
	I0908 11:33:00.950724  618632 cni.go:84] Creating CNI manager for ""
	I0908 11:33:00.950818  618632 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 11:33:00.950837  618632 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 11:33:00.950937  618632 start.go:348] cluster config:
	{Name:download-only-829747 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-829747 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:33:00.952592  618632 out.go:99] Starting "download-only-829747" primary control-plane node in "download-only-829747" cluster
	I0908 11:33:00.952644  618632 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 11:33:00.953994  618632 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 11:33:00.954052  618632 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 11:33:00.954215  618632 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 11:33:00.971807  618632 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 11:33:00.972094  618632 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 11:33:00.972239  618632 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 11:33:00.977648  618632 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 11:33:00.977677  618632 cache.go:58] Caching tarball of preloaded images
	I0908 11:33:00.977898  618632 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 11:33:00.979758  618632 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 11:33:00.979784  618632 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 11:33:01.007174  618632 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0908 11:33:04.760964  618632 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 11:33:04.761088  618632 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0908 11:33:05.063224  618632 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 11:33:05.724212  618632 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0908 11:33:05.724597  618632 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/download-only-829747/config.json ...
	I0908 11:33:05.724634  618632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/download-only-829747/config.json: {Name:mkcc22435042461c77d670f719240a4d2e872a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 11:33:05.724805  618632 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 11:33:05.724956  618632 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21512-614854/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-829747 host does not exist
	  To start a cluster, run: "minikube start -p download-only-829747"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
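
The log above shows the download-only flow fetching the CRI-O preload tarball plus its md5 checksum and then caching kubectl. A rough manual equivalent, using the paths and checksum from this run (the md5sum check is an illustrative addition, not part of the test):

	# repeat the download-only start the test issues; no cluster is created
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-829747 --force \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker
	# the preload should land under $MINIKUBE_HOME (the job points this at the Jenkins
	# workspace; the default is ~/.minikube) and match the md5 from the download URL above
	md5sum $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	# expected: 72bc7f8573f574c02d8c9a9b3496176b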

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-829747
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (4.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-264760 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-264760 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.681336674s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.68s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 11:33:11.437076  618620 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 11:33:11.437138  618620 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-614854/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-264760
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-264760: exit status 85 (68.441102ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-829747 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-829747 │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │ 08 Sep 25 11:33 UTC │
	│ delete  │ -p download-only-829747                                                                                                                                                   │ download-only-829747 │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │ 08 Sep 25 11:33 UTC │
	│ start   │ -o=json --download-only -p download-only-264760 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-264760 │ jenkins │ v1.36.0 │ 08 Sep 25 11:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:33:06
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:33:06.803317  618983 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:33:06.803450  618983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:33:06.803460  618983 out.go:374] Setting ErrFile to fd 2...
	I0908 11:33:06.803463  618983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:33:06.803692  618983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 11:33:06.804297  618983 out.go:368] Setting JSON to true
	I0908 11:33:06.805290  618983 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8131,"bootTime":1757323056,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:33:06.805388  618983 start.go:140] virtualization: kvm guest
	I0908 11:33:06.807305  618983 out.go:99] [download-only-264760] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:33:06.807462  618983 notify.go:220] Checking for updates...
	I0908 11:33:06.808960  618983 out.go:171] MINIKUBE_LOCATION=21512
	I0908 11:33:06.810636  618983 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:33:06.812112  618983 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:33:06.813409  618983 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:33:06.814868  618983 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 11:33:06.817917  618983 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 11:33:06.818233  618983 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:33:06.841531  618983 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:33:06.841613  618983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:33:06.892091  618983 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 11:33:06.882257962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:33:06.892213  618983 docker.go:318] overlay module found
	I0908 11:33:06.894141  618983 out.go:99] Using the docker driver based on user configuration
	I0908 11:33:06.894196  618983 start.go:304] selected driver: docker
	I0908 11:33:06.894210  618983 start.go:918] validating driver "docker" against <nil>
	I0908 11:33:06.894359  618983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:33:06.945861  618983 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 11:33:06.936880812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:33:06.946086  618983 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:33:06.946625  618983 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0908 11:33:06.946849  618983 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 11:33:06.949102  618983 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-264760 host does not exist
	  To start a cluster, run: "minikube start -p download-only-264760"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-264760
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (1.19s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-220243 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-220243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-220243
--- PASS: TestDownloadOnlyKic (1.19s)

                                                
                                    
TestBinaryMirror (0.84s)

                                                
                                                
=== RUN   TestBinaryMirror
I0908 11:33:13.360604  618620 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-922073 --alsologtostderr --binary-mirror http://127.0.0.1:41749 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-922073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-922073
--- PASS: TestBinaryMirror (0.84s)

                                                
                                    
TestOffline (91.08s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-248556 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-248556 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m28.655162475s)
helpers_test.go:175: Cleaning up "offline-crio-248556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-248556
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-248556: (2.420505445s)
--- PASS: TestOffline (91.08s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-960652
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-960652: exit status 85 (64.666212ms)

                                                
                                                
-- stdout --
	* Profile "addons-960652" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-960652"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-960652
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-960652: exit status 85 (70.683164ms)

                                                
                                                
-- stdout --
	* Profile "addons-960652" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-960652"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
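
Both PreSetup checks above run addon commands against a profile that has not been created yet and expect the "Profile ... not found" hint plus a non-zero exit (85 in this run). A quick hedged reproduction with a deliberately bogus profile name:

	out/minikube-linux-amd64 addons enable dashboard -p no-such-profile; echo "exit=$?"
	out/minikube-linux-amd64 addons disable dashboard -p no-such-profile; echo "exit=$?"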

                                                
                                    
TestAddons/Setup (161.39s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-960652 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-960652 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m41.393410156s)
--- PASS: TestAddons/Setup (161.39s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-960652 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-960652 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.48s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-960652 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-960652 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e0ee3ee9-1b01-4029-b08d-96a0eaea67e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e0ee3ee9-1b01-4029-b08d-96a0eaea67e6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003914515s
addons_test.go:694: (dbg) Run:  kubectl --context addons-960652 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-960652 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-960652 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.48s)
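
The gcp-auth checks above create the busybox pod from testdata and then read the injected credential variables back out of the running container. The same verification by hand (the wait step is an illustrative addition; the test polls pod state itself):

	kubectl --context addons-960652 create -f testdata/busybox.yaml
	kubectl --context addons-960652 wait --for=condition=Ready pod/busybox --timeout=120s
	kubectl --context addons-960652 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT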

                                                
                                    
TestAddons/parallel/Registry (15.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.815112ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-zbdzw" [eb08f47e-6098-4caa-a39d-4250b569612a] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003280758s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-x79dt" [4e5f709b-3754-4bf9-a2ab-51c1f737b115] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003671395s
addons_test.go:392: (dbg) Run:  kubectl --context addons-960652 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-960652 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-960652 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.810979033s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 ip
2025/09/08 11:36:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.59s)
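
The registry check probes the addon from two directions: from inside the cluster via the kube-system Service, and from the host against port 5000 on the node IP reported by "minikube ip" (the DEBUG GET above). Roughly, by hand (the curl line is an illustrative stand-in for the test's HTTP GET):

	kubectl --context addons-960652 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -sI "http://$(out/minikube-linux-amd64 -p addons-960652 ip):5000"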

                                                
                                    
TestAddons/parallel/RegistryCreds (0.64s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.561888ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-960652
addons_test.go:332: (dbg) Run:  kubectl --context addons-960652 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.64s)

                                                
                                    
TestAddons/parallel/InspektorGadget (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jrm97" [ac37e465-9b51-487d-85a1-e717962b0b1f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003928468s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.82s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.600614ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mc5nf" [564706d3-c633-40a6-9289-585f1e5a5c7d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004020443s
addons_test.go:463: (dbg) Run:  kubectl --context addons-960652 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.82s)

                                                
                                    
TestAddons/parallel/Headlamp (21.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-960652 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-2sbcr" [a00bfa46-a8c4-4682-a0f0-ab9d85cac16b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-2sbcr" [a00bfa46-a8c4-4682-a0f0-ab9d85cac16b] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003576304s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-960652 addons disable headlamp --alsologtostderr -v=1: (5.801426669s)
--- PASS: TestAddons/parallel/Headlamp (21.66s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-24n9x" [66638bb3-d831-45a1-a1d1-e07d0426f9c3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003921868s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

                                                
                                    
TestAddons/parallel/LocalPath (12.16s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-960652 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-960652 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-960652 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [7b7383dd-2950-4d24-aa7d-a821e6c1cfe2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [7b7383dd-2950-4d24-aa7d-a821e6c1cfe2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [7b7383dd-2950-4d24-aa7d-a821e6c1cfe2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004080615s
addons_test.go:967: (dbg) Run:  kubectl --context addons-960652 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 ssh "cat /opt/local-path-provisioner/pvc-a4a34cf8-0045-4c4f-ba4a-0035da17388c_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-960652 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-960652 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.16s)
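
The local-path flow creates a PVC and a consuming pod from the storage-provisioner-rancher testdata, waits for the pod to complete, then reads the provisioned file back from the node. By hand (the PVC UID in the on-disk path changes per run; the one below is taken from this log):

	kubectl --context addons-960652 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-960652 apply -f testdata/storage-provisioner-rancher/pod.yaml
	kubectl --context addons-960652 get pvc test-pvc -o jsonpath='{.status.phase}'
	out/minikube-linux-amd64 -p addons-960652 ssh \
	  "cat /opt/local-path-provisioner/pvc-a4a34cf8-0045-4c4f-ba4a-0035da17388c_default_test-pvc/file1"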

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.96s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-hckn5" [8a17fe7a-39e5-454c-bb05-d21c5cd2db0e] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00443659s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.96s)

                                                
                                    
TestAddons/parallel/Yakd (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-dntfm" [94e14e71-c8af-425d-b00f-3f8d69dc9df3] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004065369s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-960652 addons disable yakd --alsologtostderr -v=1: (5.76452553s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-z9mjk" [056bccc6-d6e9-439f-b5d5-7d4e79ed80ff] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003616056s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.54s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-960652
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-960652: (11.941377933s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-960652
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-960652
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-960652
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

                                                
                                    
TestCertOptions (28.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-799766 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-799766 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.473453195s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-799766 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-799766 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-799766 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-799766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-799766
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-799766: (1.916343606s)
--- PASS: TestCertOptions (28.02s)
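
The commands above indicate TestCertOptions checks that the extra --apiserver-ips/--apiserver-names land in the apiserver certificate and that the non-default port 8555 shows up in the kubeconfig. A hand check along the same lines (the grep filters are illustrative additions):

	out/minikube-linux-amd64 -p cert-options-799766 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	kubectl --context cert-options-799766 config view | grep 8555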

                                                
                                    
TestCertExpiration (227.15s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-310765 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-310765 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (29.447162976s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-310765 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-310765 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.264143828s)
helpers_test.go:175: Cleaning up "cert-expiration-310765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-310765
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-310765: (2.436611597s)
--- PASS: TestCertExpiration (227.15s)
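
The two starts above differ only in --cert-expiration: the first issues three-minute certificates, and the restart with --cert-expiration=8760h is presumably what forces them to be regenerated once that short window has lapsed (which would also explain why the test's wall time far exceeds the two start durations). Verbatim from the log:

	out/minikube-linux-amd64 start -p cert-expiration-310765 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p cert-expiration-310765 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio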

                                                
                                    
TestForceSystemdFlag (31.79s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-530334 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-530334 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.032995929s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-530334 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-530334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-530334
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-530334: (2.481800731s)
--- PASS: TestForceSystemdFlag (31.79s)
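
The only in-guest check shown is a cat of CRI-O's drop-in config, which with --force-systemd presumably has to name systemd as the cgroup manager. A small Go sketch of that check under that assumption, reusing the profile name from the log:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Read CRI-O's drop-in config from the node, as the test does after starting
	// the cluster with --force-systemd.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-530334", "ssh",
		"cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		log.Fatalf("reading CRI-O config failed: %v\n%s", err, out)
	}
	// Assumption: with systemd forced, the cgroup manager setting names systemd.
	if !strings.Contains(string(out), "systemd") {
		log.Fatalf("expected a systemd cgroup manager in 02-crio.conf, got:\n%s", out)
	}
	log.Println("CRI-O is configured with the systemd cgroup manager")
}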

                                                
                                    
TestForceSystemdEnv (44.67s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-338821 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-338821 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.773593302s)
helpers_test.go:175: Cleaning up "force-systemd-env-338821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-338821
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-338821: (2.896292277s)
--- PASS: TestForceSystemdEnv (44.67s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.27s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0908 12:27:38.827289  618620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 12:27:38.827439  618620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0908 12:27:38.866630  618620 install.go:62] docker-machine-driver-kvm2: exit status 1
W0908 12:27:38.866815  618620 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 12:27:38.866874  618620 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1847856941/001/docker-machine-driver-kvm2
I0908 12:27:39.022523  618620 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1847856941/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000695730 gz:0xc000695738 tar:0xc0006956e0 tar.bz2:0xc0006956f0 tar.gz:0xc000695700 tar.xz:0xc000695710 tar.zst:0xc000695720 tbz2:0xc0006956f0 tgz:0xc000695700 txz:0xc000695710 tzst:0xc000695720 xz:0xc000695740 zip:0xc000695750 zst:0xc000695748] Getters:map[file:0xc0023b89c0 http:0xc0023868c0 https:0xc002386910] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 12:27:39.022580  618620 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1847856941/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.27s)
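
The log above shows the interesting part of this test: the checksum for the arch-specific v1.3.0 asset 404s, so the downloader falls back to the common (unsuffixed) driver name. A simplified Go sketch of that fallback, only probing the two URLs rather than using go-getter with checksums as the real download.go does:

package main

import (
	"fmt"
	"net/http"
)

// fetchable reports whether a URL answers 200, which is roughly what the
// checksum download above is probing before committing to an asset.
func fetchable(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"

	// Prefer the arch-suffixed asset; v1.3.0 only shipped the unsuffixed binary,
	// so the 404 on the -amd64 checksum triggers the fallback seen in the log.
	url := base + "-amd64"
	if !fetchable(url + ".sha256") {
		url = base
	}
	fmt.Println("would download:", url)
}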

                                                
                                    
TestErrorSpam/setup (23.96s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-046000 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-046000 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-046000 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-046000 --driver=docker  --container-runtime=crio: (23.959345953s)
--- PASS: TestErrorSpam/setup (23.96s)

                                                
                                    
TestErrorSpam/start (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

                                                
                                    
TestErrorSpam/status (0.93s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 status
--- PASS: TestErrorSpam/status (0.93s)

                                                
                                    
TestErrorSpam/pause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 pause
--- PASS: TestErrorSpam/pause (1.66s)

                                                
                                    
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (1.41s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 stop: (1.202216431s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-046000 --log_dir /tmp/nospam-046000 stop
--- PASS: TestErrorSpam/stop (1.41s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21512-614854/.minikube/files/etc/test/nested/copy/618620/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (70.25s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982703 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-982703 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m10.252420503s)
--- PASS: TestFunctional/serial/StartWithProxy (70.25s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (28.65s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0908 11:45:02.699583  618620 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982703 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-982703 --alsologtostderr -v=8: (28.648367393s)
functional_test.go:678: soft start took 28.649378821s for "functional-982703" cluster.
I0908 11:45:31.348480  618620 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (28.65s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-982703 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.02s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 cache add registry.k8s.io/pause:3.3: (1.034330477s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 cache add registry.k8s.io/pause:latest: (1.028185568s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.02s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-982703 /tmp/TestFunctionalserialCacheCmdcacheadd_local2879033525/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 cache add minikube-local-cache-test:functional-982703
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 cache delete minikube-local-cache-test:functional-982703
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-982703
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)
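
add_local builds a throwaway image on the host, pushes it into the cluster with cache add, then removes it again. A rough Go sketch of the same round trip; the build-context path is hypothetical, only the image tag is taken from the log:

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	img := "minikube-local-cache-test:functional-982703" // tag taken from the log

	// Build a host-local image, push it into the minikube cache, then clean up,
	// mirroring the add_local sequence above. The build context is hypothetical.
	run("docker", "build", "-t", img, "./testdata/local-cache")
	run("out/minikube-linux-amd64", "-p", "functional-982703", "cache", "add", img)
	run("out/minikube-linux-amd64", "-p", "functional-982703", "cache", "delete", img)
	run("docker", "rmi", img)
}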

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.222418ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
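
cache_reload demonstrates the point of the cache: the image is deleted from the node's runtime (so crictl inspecti fails with exit 1, as captured above), and cache reload pushes the cached copy back in. A Go sketch of the same sequence, assuming the functional-982703 profile from this run:

package main

import (
	"log"
	"os/exec"
)

func minikube(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Printf("minikube %v: %v\n%s", args, err, out)
	}
	return err
}

func main() {
	const img = "registry.k8s.io/pause:latest"

	// Delete the image from the node's runtime; inspecti must now fail.
	minikube("-p", "functional-982703", "ssh", "sudo crictl rmi "+img)
	if minikube("-p", "functional-982703", "ssh", "sudo crictl inspecti "+img) == nil {
		log.Fatal("expected crictl inspecti to fail after rmi")
	}

	// cache reload pushes the cached copy back into the node, so inspecti succeeds again.
	minikube("-p", "functional-982703", "cache", "reload")
	if err := minikube("-p", "functional-982703", "ssh", "sudo crictl inspecti "+img); err != nil {
		log.Fatalf("image still missing after cache reload: %v", err)
	}
}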

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 kubectl -- --context functional-982703 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-982703 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.21s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982703 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 11:45:56.270306  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:45:56.276775  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:45:56.288243  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:45:56.309779  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:45:56.351205  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:45:56.432798  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:45:56.594350  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:45:56.916067  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:45:57.558185  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:45:58.839842  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:46:01.402891  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:46:06.525290  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:46:16.767093  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-982703 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.208275822s)
functional_test.go:776: restart took 39.208444513s for "functional-982703" cluster.
I0908 11:46:17.316656  618620 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (39.21s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-982703 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
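
The phase/status lines above come from a single kubectl get of the control-plane pods in JSON form. A sketch of how those lines can be derived from that output; the struct below only models the fields needed and is an assumption about how functional_test.go reads the JSON:

package main

import (
	"encoding/json"
	"log"
	"os/exec"
)

// podList models only the fields the health check needs from `kubectl ... -o=json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-982703",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		log.Printf("%s phase: %s, Ready: %s", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}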

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 logs: (1.483023762s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 logs --file /tmp/TestFunctionalserialLogsFileCmd577549918/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 logs --file /tmp/TestFunctionalserialLogsFileCmd577549918/001/logs.txt: (1.53390142s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                    
TestFunctional/serial/InvalidService (3.99s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-982703 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-982703
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-982703: exit status 115 (346.404092ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32350 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-982703 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 config get cpus: exit status 14 (81.332338ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 config get cpus: exit status 14 (78.199945ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
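
The exit status 14 captured above is what minikube config get returns for a key that is not set, which is what makes the unset/get/set/get/unset sequence a useful round trip. A Go sketch of the same round trip, treating 14 as the expected code for an unset key:

package main

import (
	"errors"
	"log"
	"os/exec"
)

// config runs `minikube -p functional-982703 config <args>` and returns output and exit code.
func config(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-982703", "config"}, args...)...).CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	config("unset", "cpus")
	// A key that is not set comes back with exit code 14, as captured in the test output.
	if out, code := config("get", "cpus"); code != 14 {
		log.Fatalf("expected exit code 14 for an unset key, got %d: %s", code, out)
	}
	config("set", "cpus", "2")
	if out, code := config("get", "cpus"); code != 0 {
		log.Fatalf("get after set failed (%d): %s", code, out)
	}
	config("unset", "cpus")
}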

                                                
                                    
TestFunctional/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-982703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.776956ms)

                                                
                                                
-- stdout --
	* [functional-982703] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:52:34.735411  663888 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:52:34.735728  663888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:34.735739  663888 out.go:374] Setting ErrFile to fd 2...
	I0908 11:52:34.735743  663888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:34.736013  663888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 11:52:34.736695  663888 out.go:368] Setting JSON to false
	I0908 11:52:34.737816  663888 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9299,"bootTime":1757323056,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:52:34.737951  663888 start.go:140] virtualization: kvm guest
	I0908 11:52:34.740629  663888 out.go:179] * [functional-982703] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:52:34.742175  663888 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:52:34.742206  663888 notify.go:220] Checking for updates...
	I0908 11:52:34.745216  663888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:52:34.746647  663888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:52:34.748017  663888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:52:34.749338  663888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:52:34.750693  663888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:52:34.752597  663888 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:52:34.753092  663888 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:52:34.778132  663888 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:52:34.778238  663888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:52:34.834740  663888 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 11:52:34.824481406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:52:34.834851  663888 docker.go:318] overlay module found
	I0908 11:52:34.836643  663888 out.go:179] * Using the docker driver based on existing profile
	I0908 11:52:34.838074  663888 start.go:304] selected driver: docker
	I0908 11:52:34.838098  663888 start.go:918] validating driver "docker" against &{Name:functional-982703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:52:34.838206  663888 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:52:34.840650  663888 out.go:203] 
	W0908 11:52:34.842356  663888 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 11:52:34.843785  663888 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982703 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
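
The dry run fails with exit status 23 purely on memory validation: 250MiB is below the 1800MB floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message. A toy Go sketch of that rule; the real constant and exit-code plumbing live in minikube's start validation, this just mirrors the message above:

package main

import (
	"fmt"
	"os"
)

// minUsableMiB mirrors the 1800MB floor quoted in the error message above; the
// real constant and its units live in minikube's start validation code.
const minUsableMiB = 1800

func validateMemory(requestMiB int) error {
	if requestMiB < minUsableMiB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestMiB, minUsableMiB)
	}
	return nil
}

func main() {
	// --memory 250MB from the dry run above fails this check, hence exit status 23.
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23)
	}
}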

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-982703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-982703 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (154.767408ms)

                                                
                                                
-- stdout --
	* [functional-982703] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:52:35.101685  664079 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:52:35.102075  664079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:35.102106  664079 out.go:374] Setting ErrFile to fd 2...
	I0908 11:52:35.102114  664079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:52:35.102913  664079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 11:52:35.104207  664079 out.go:368] Setting JSON to false
	I0908 11:52:35.105302  664079 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9299,"bootTime":1757323056,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:52:35.105427  664079 start.go:140] virtualization: kvm guest
	I0908 11:52:35.107319  664079 out.go:179] * [functional-982703] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0908 11:52:35.108875  664079 notify.go:220] Checking for updates...
	I0908 11:52:35.108926  664079 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:52:35.110462  664079 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:52:35.111752  664079 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 11:52:35.112962  664079 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 11:52:35.114299  664079 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:52:35.115722  664079 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:52:35.117669  664079 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 11:52:35.118354  664079 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:52:35.142804  664079 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 11:52:35.142929  664079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 11:52:35.194775  664079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-08 11:52:35.185335554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 11:52:35.194894  664079 docker.go:318] overlay module found
	I0908 11:52:35.196779  664079 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 11:52:35.198028  664079 start.go:304] selected driver: docker
	I0908 11:52:35.198047  664079 start.go:918] validating driver "docker" against &{Name:functional-982703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-982703 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:52:35.198174  664079 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:52:35.200642  664079 out.go:203] 
	W0908 11:52:35.202109  664079 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 11:52:35.203607  664079 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh -n functional-982703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 cp functional-982703:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2244528470/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh -n functional-982703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh -n functional-982703 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.82s)

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/618620/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo cat /etc/test/nested/copy/618620/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/618620.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo cat /etc/ssl/certs/618620.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/618620.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo cat /usr/share/ca-certificates/618620.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/6186202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo cat /etc/ssl/certs/6186202.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/6186202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo cat /usr/share/ca-certificates/6186202.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.74s)
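
CertSync checks the synced test certificate at three in-guest paths: the two certificate directories plus a hashed filename (51391683.0), which presumably is the openssl subject-hash name for the same cert. A Go sketch that asserts all three paths are readable inside the node, using the paths from the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Paths checked above: the test cert under both certificate directories plus
	// what looks like its openssl subject-hash name.
	paths := []string{
		"/etc/ssl/certs/618620.pem",
		"/usr/share/ca-certificates/618620.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-982703",
			"ssh", "sudo cat "+p).CombinedOutput()
		if err != nil || len(out) == 0 {
			log.Fatalf("expected certificate content at %s inside the node: %v\n%s", p, err, out)
		}
	}
	log.Println("all synced certificate paths are readable in the node")
}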

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-982703 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 ssh "sudo systemctl is-active docker": exit status 1 (320.662977ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 ssh "sudo systemctl is-active containerd": exit status 1 (280.023202ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
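The check above only passes because the non-selected runtimes report "inactive"; the exit status 3 from systemctl is-active is normal for an inactive unit and is why the ssh wrapper returns non-zero. A minimal way to re-run the same probe by hand, assuming the functional-982703 profile from this run is still up (sketch only, not part of the test):

# sketch: repeat the runtime probes the test ran; "inactive" plus a non-zero exit is the passing outcome
out/minikube-linux-amd64 -p functional-982703 ssh "sudo systemctl is-active docker"
out/minikube-linux-amd64 -p functional-982703 ssh "sudo systemctl is-active containerd"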

                                                
                                    
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-982703 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-982703
localhost/kicbase/echo-server:functional-982703
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982703 image ls --format short --alsologtostderr:
I0908 11:56:33.837756  669126 out.go:360] Setting OutFile to fd 1 ...
I0908 11:56:33.837860  669126 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:33.837869  669126 out.go:374] Setting ErrFile to fd 2...
I0908 11:56:33.837873  669126 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:33.838137  669126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
I0908 11:56:33.838718  669126 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:33.838813  669126 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:33.839278  669126 cli_runner.go:164] Run: docker container inspect functional-982703 --format={{.State.Status}}
I0908 11:56:33.860364  669126 ssh_runner.go:195] Run: systemctl --version
I0908 11:56:33.860422  669126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
I0908 11:56:33.880184  669126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
I0908 11:56:33.969513  669126 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-982703 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-982703  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-982703  │ f7fb95df284b1 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982703 image ls --format table --alsologtostderr:
I0908 11:56:34.803199  669573 out.go:360] Setting OutFile to fd 1 ...
I0908 11:56:34.803466  669573 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:34.803477  669573 out.go:374] Setting ErrFile to fd 2...
I0908 11:56:34.803481  669573 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:34.803766  669573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
I0908 11:56:34.804414  669573 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:34.804520  669573 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:34.804922  669573 cli_runner.go:164] Run: docker container inspect functional-982703 --format={{.State.Status}}
I0908 11:56:34.825052  669573 ssh_runner.go:195] Run: systemctl --version
I0908 11:56:34.825125  669573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
I0908 11:56:34.845784  669573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
I0908 11:56:34.932715  669573 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-982703 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-982703"],"size":"4943877"},{"id":"f7fb95df284b14237a19698a7436888c5f2a7868d5752998bca5545c5f9dcd33","repoDigests":["localhost/minikube-local-cache-test@sha256:c8a313e7c4ec84affd486728a7e3cdbf950f52162ecd318a6912fed914dd366a"],"repoTags":["localhost/minikube-local-cache-test:functional-982703"],"size":"3330"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["re
gistry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:1
8eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200
fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6f
cf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982703 image ls --format json --alsologtostderr:
I0908 11:56:34.572325  669475 out.go:360] Setting OutFile to fd 1 ...
I0908 11:56:34.572683  669475 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:34.572698  669475 out.go:374] Setting ErrFile to fd 2...
I0908 11:56:34.572704  669475 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:34.572976  669475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
I0908 11:56:34.573683  669475 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:34.573781  669475 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:34.574149  669475 cli_runner.go:164] Run: docker container inspect functional-982703 --format={{.State.Status}}
I0908 11:56:34.596254  669475 ssh_runner.go:195] Run: systemctl --version
I0908 11:56:34.596320  669475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
I0908 11:56:34.617122  669475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
I0908 11:56:34.704475  669475 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-982703 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-982703
size: "4943877"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: f7fb95df284b14237a19698a7436888c5f2a7868d5752998bca5545c5f9dcd33
repoDigests:
- localhost/minikube-local-cache-test@sha256:c8a313e7c4ec84affd486728a7e3cdbf950f52162ecd318a6912fed914dd366a
repoTags:
- localhost/minikube-local-cache-test:functional-982703
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982703 image ls --format yaml --alsologtostderr:
I0908 11:56:34.075312  669223 out.go:360] Setting OutFile to fd 1 ...
I0908 11:56:34.075585  669223 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:34.075594  669223 out.go:374] Setting ErrFile to fd 2...
I0908 11:56:34.075598  669223 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:34.075828  669223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
I0908 11:56:34.078008  669223 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:34.078153  669223 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:34.078559  669223 cli_runner.go:164] Run: docker container inspect functional-982703 --format={{.State.Status}}
I0908 11:56:34.101736  669223 ssh_runner.go:195] Run: systemctl --version
I0908 11:56:34.101799  669223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
I0908 11:56:34.121230  669223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
I0908 11:56:34.204826  669223 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 ssh pgrep buildkitd: exit status 1 (280.896947ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image build -t localhost/my-image:functional-982703 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 image build -t localhost/my-image:functional-982703 testdata/build --alsologtostderr: (2.232226782s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-982703 image build -t localhost/my-image:functional-982703 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0a9a99405e7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-982703
--> 2d7b2802fcc
Successfully tagged localhost/my-image:functional-982703
2d7b2802fcc8fe1af24989d7f1a5fcaf63b63ca55c212a4937eb011a3471ae34
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-982703 image build -t localhost/my-image:functional-982703 testdata/build --alsologtostderr:
I0908 11:56:34.595980  669486 out.go:360] Setting OutFile to fd 1 ...
I0908 11:56:34.596262  669486 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:34.596270  669486 out.go:374] Setting ErrFile to fd 2...
I0908 11:56:34.596274  669486 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:56:34.596577  669486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
I0908 11:56:34.597262  669486 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:34.598149  669486 config.go:182] Loaded profile config "functional-982703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 11:56:34.598809  669486 cli_runner.go:164] Run: docker container inspect functional-982703 --format={{.State.Status}}
I0908 11:56:34.618330  669486 ssh_runner.go:195] Run: systemctl --version
I0908 11:56:34.618385  669486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-982703
I0908 11:56:34.636852  669486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/functional-982703/id_rsa Username:docker}
I0908 11:56:34.724666  669486 build_images.go:161] Building image from path: /tmp/build.506859478.tar
I0908 11:56:34.724750  669486 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 11:56:34.735222  669486 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.506859478.tar
I0908 11:56:34.739395  669486 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.506859478.tar: stat -c "%s %y" /var/lib/minikube/build/build.506859478.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.506859478.tar': No such file or directory
I0908 11:56:34.739427  669486 ssh_runner.go:362] scp /tmp/build.506859478.tar --> /var/lib/minikube/build/build.506859478.tar (3072 bytes)
I0908 11:56:34.768119  669486 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.506859478
I0908 11:56:34.777926  669486 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.506859478 -xf /var/lib/minikube/build/build.506859478.tar
I0908 11:56:34.788697  669486 crio.go:315] Building image: /var/lib/minikube/build/build.506859478
I0908 11:56:34.788766  669486 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-982703 /var/lib/minikube/build/build.506859478 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0908 11:56:36.743975  669486 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-982703 /var/lib/minikube/build/build.506859478 --cgroup-manager=cgroupfs: (1.955175451s)
I0908 11:56:36.744051  669486 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.506859478
I0908 11:56:36.753649  669486 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.506859478.tar
I0908 11:56:36.762779  669486 build_images.go:217] Built localhost/my-image:functional-982703 from /tmp/build.506859478.tar
I0908 11:56:36.762816  669486 build_images.go:133] succeeded building to: functional-982703
I0908 11:56:36.762823  669486 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)
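Because the pgrep buildkitd probe fails, minikube falls back to building with podman inside the node over ssh, as the stderr trace shows. A hedged sketch of reproducing the same build by hand, reusing only the commands from the log above and assuming testdata/build still contains the three-step context that was logged:

# sketch: rebuild the test image and confirm cri-o can see the resulting tag
out/minikube-linux-amd64 -p functional-982703 image build -t localhost/my-image:functional-982703 testdata/build --alsologtostderr
out/minikube-linux-amd64 -p functional-982703 image ls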

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-982703
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image load --daemon kicbase/echo-server:functional-982703 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 image load --daemon kicbase/echo-server:functional-982703 --alsologtostderr: (1.255653804s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image load --daemon kicbase/echo-server:functional-982703 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-982703 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-982703 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-982703 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-982703 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 658891: os: process already finished
helpers_test.go:519: unable to terminate pid 658698: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-982703
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image load --daemon kicbase/echo-server:functional-982703 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-982703 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image save kicbase/echo-server:functional-982703 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image rm kicbase/echo-server:functional-982703 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-982703
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 image save --daemon kicbase/echo-server:functional-982703 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-982703
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
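Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon tests round-trip kicbase/echo-server between cri-o storage, a tarball on the host, and the host Docker daemon. A condensed sketch of that round trip, using only commands that appear in the logs above (the tar path is the one this runner used):

# sketch: save to a tar, drop the image, reload it, then push it back into the host Docker daemon
out/minikube-linux-amd64 -p functional-982703 image save kicbase/echo-server:functional-982703 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-982703 image rm kicbase/echo-server:functional-982703 --alsologtostderr
out/minikube-linux-amd64 -p functional-982703 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-982703 image save --daemon kicbase/echo-server:functional-982703 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-982703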

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-982703 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "324.261968ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "53.584258ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "315.210802ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.141454ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (49.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdany-port3842913867/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757332311617822979" to /tmp/TestFunctionalparallelMountCmdany-port3842913867/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757332311617822979" to /tmp/TestFunctionalparallelMountCmdany-port3842913867/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757332311617822979" to /tmp/TestFunctionalparallelMountCmdany-port3842913867/001/test-1757332311617822979
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.588482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 11:51:51.895744  618620 retry.go:31] will retry after 695.458586ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 11:51 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 11:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 11:51 test-1757332311617822979
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh cat /mount-9p/test-1757332311617822979
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-982703 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [5bbe0cbc-2249-41a2-a9db-a0e647c4b3ad] Pending
helpers_test.go:352: "busybox-mount" [5bbe0cbc-2249-41a2-a9db-a0e647c4b3ad] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [5bbe0cbc-2249-41a2-a9db-a0e647c4b3ad] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [5bbe0cbc-2249-41a2-a9db-a0e647c4b3ad] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 47.004083155s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-982703 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdany-port3842913867/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (49.81s)
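The mount tests expose a host directory to the node over 9p and then verify it from inside the guest. A minimal interactive sketch of the same sequence, built from the commands in the log above; /tmp/demo-mount stands in for any writable host directory and is not a path the test used:

# sketch: mount a host dir at /mount-9p, verify it from the guest, then unmount
out/minikube-linux-amd64 mount -p functional-982703 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-982703 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-982703 ssh "sudo umount -f /mount-9p"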

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdspecific-port1884405908/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.99975ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 11:52:41.687887  618620 retry.go:31] will retry after 481.376827ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdspecific-port1884405908/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 ssh "sudo umount -f /mount-9p": exit status 1 (261.585412ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-982703 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdspecific-port1884405908/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T" /mount1: exit status 1 (326.380246ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 11:52:43.482489  618620 retry.go:31] will retry after 363.252728ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-982703 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-982703 /tmp/TestFunctionalparallelMountCmdVerifyCleanup366583128/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 service list: (1.690147388s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-982703 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-982703 service list -o json: (1.694095316s)
functional_test.go:1504: Took "1.694214928s" to run "out/minikube-linux-amd64 -p functional-982703 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-982703
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-982703
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-982703
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (177.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m57.020186021s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (177.74s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 kubectl -- rollout status deployment/busybox: (3.795914721s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-8q6xl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-b6mgc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-wb5fx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-8q6xl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-b6mgc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-wb5fx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-8q6xl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-b6mgc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-wb5fx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.91s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-8q6xl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-8q6xl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-b6mgc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-b6mgc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-wb5fx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 kubectl -- exec busybox-7b57f96db7-wb5fx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)
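Note: the host-IP lookup that PingHostFromPods drives through kubectl exec can be reproduced outside the harness. The Go sketch below mirrors the nslookup | awk 'NR==5' | cut -d' ' -f3 pipeline and the single ping shown above; it is an illustration only, the pod name is a placeholder rather than one of the generated busybox-7b57f96db7-* names, and it assumes the ha-819472 context is reachable via kubectl.

// hostip.go: minimal sketch of the host.minikube.internal check run above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-example" // hypothetical pod name

	// Equivalent of: kubectl exec <pod> -- nslookup host.minikube.internal
	out, err := exec.Command("kubectl", "--context", "ha-819472", "exec", pod, "--",
		"nslookup", "host.minikube.internal").Output()
	if err != nil {
		log.Fatalf("nslookup failed: %v", err)
	}

	// Roughly the awk 'NR==5' | cut -d' ' -f3 step: third field of line 5.
	lines := strings.Split(string(out), "\n")
	if len(lines) < 5 {
		log.Fatalf("unexpected nslookup output:\n%s", out)
	}
	fields := strings.Fields(lines[4])
	if len(fields) < 3 {
		log.Fatalf("unexpected address line: %q", lines[4])
	}
	hostIP := fields[2]
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Equivalent of: kubectl exec <pod> -- ping -c 1 <ip>
	if err := exec.Command("kubectl", "--context", "ha-819472", "exec", pod, "--",
		"ping", "-c", "1", hostIP).Run(); err != nil {
		log.Fatalf("ping failed: %v", err)
	}
	fmt.Println("host reachable from pod", pod)
}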

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 node add --alsologtostderr -v 5
E0908 12:05:56.269854  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:24.584924  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:24.591477  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:24.602895  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:24.624379  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:24.665832  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:24.747313  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:24.908929  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:25.230666  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:25.872786  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:27.154613  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:29.716877  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:34.838801  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:45.080882  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 node add --alsologtostderr -v 5: (56.864838919s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.73s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-819472 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp testdata/cp-test.txt ha-819472:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2491494216/001/cp-test_ha-819472.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472:/home/docker/cp-test.txt ha-819472-m02:/home/docker/cp-test_ha-819472_ha-819472-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m02 "sudo cat /home/docker/cp-test_ha-819472_ha-819472-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472:/home/docker/cp-test.txt ha-819472-m03:/home/docker/cp-test_ha-819472_ha-819472-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m03 "sudo cat /home/docker/cp-test_ha-819472_ha-819472-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472:/home/docker/cp-test.txt ha-819472-m04:/home/docker/cp-test_ha-819472_ha-819472-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m04 "sudo cat /home/docker/cp-test_ha-819472_ha-819472-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp testdata/cp-test.txt ha-819472-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2491494216/001/cp-test_ha-819472-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m02:/home/docker/cp-test.txt ha-819472:/home/docker/cp-test_ha-819472-m02_ha-819472.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472 "sudo cat /home/docker/cp-test_ha-819472-m02_ha-819472.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m02:/home/docker/cp-test.txt ha-819472-m03:/home/docker/cp-test_ha-819472-m02_ha-819472-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m03 "sudo cat /home/docker/cp-test_ha-819472-m02_ha-819472-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m02:/home/docker/cp-test.txt ha-819472-m04:/home/docker/cp-test_ha-819472-m02_ha-819472-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m04 "sudo cat /home/docker/cp-test_ha-819472-m02_ha-819472-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp testdata/cp-test.txt ha-819472-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2491494216/001/cp-test_ha-819472-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m03:/home/docker/cp-test.txt ha-819472:/home/docker/cp-test_ha-819472-m03_ha-819472.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472 "sudo cat /home/docker/cp-test_ha-819472-m03_ha-819472.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m03:/home/docker/cp-test.txt ha-819472-m02:/home/docker/cp-test_ha-819472-m03_ha-819472-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m02 "sudo cat /home/docker/cp-test_ha-819472-m03_ha-819472-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m03:/home/docker/cp-test.txt ha-819472-m04:/home/docker/cp-test_ha-819472-m03_ha-819472-m04.txt
E0908 12:07:05.562871  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m04 "sudo cat /home/docker/cp-test_ha-819472-m03_ha-819472-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp testdata/cp-test.txt ha-819472-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2491494216/001/cp-test_ha-819472-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m04:/home/docker/cp-test.txt ha-819472:/home/docker/cp-test_ha-819472-m04_ha-819472.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472 "sudo cat /home/docker/cp-test_ha-819472-m04_ha-819472.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m04:/home/docker/cp-test.txt ha-819472-m02:/home/docker/cp-test_ha-819472-m04_ha-819472-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m02 "sudo cat /home/docker/cp-test_ha-819472-m04_ha-819472-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 cp ha-819472-m04:/home/docker/cp-test.txt ha-819472-m03:/home/docker/cp-test_ha-819472-m04_ha-819472-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 ssh -n ha-819472-m03 "sudo cat /home/docker/cp-test_ha-819472-m04_ha-819472-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.66s)
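Note: the CopyFile log above is the same cp-then-ssh-cat round trip repeated for every node pair. The Go sketch below shows that pattern for a single node; it is a minimal illustration (not the test's own code) and assumes a running ha-819472 profile and the minikube binary at the relative path used throughout this report.

// cpcheck.go: round-trip check of `minikube cp`, as exercised by CopyFile.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile = "ha-819472"
	const node = "ha-819472-m02"

	src, err := os.CreateTemp("", "cp-test-*.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(src.Name())
	want := []byte("hello from the host\n")
	if _, err := src.Write(want); err != nil {
		log.Fatal(err)
	}
	src.Close()

	// minikube -p <profile> cp <local> <node>:<remote>
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", src.Name(), node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// minikube -p <profile> ssh -n <node> "sudo cat <remote>"
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(got, want) {
		log.Fatalf("content mismatch: got %q, want %q", got, want)
	}
	log.Println("cp round-trip verified on", node)
}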

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 node stop m02 --alsologtostderr -v 5: (11.913479729s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5: exit status 7 (685.514849ms)

                                                
                                                
-- stdout --
	ha-819472
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-819472-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-819472-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-819472-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:07:22.266949  693243 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:07:22.267231  693243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:07:22.267240  693243 out.go:374] Setting ErrFile to fd 2...
	I0908 12:07:22.267245  693243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:07:22.267439  693243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:07:22.267624  693243 out.go:368] Setting JSON to false
	I0908 12:07:22.267694  693243 mustload.go:65] Loading cluster: ha-819472
	I0908 12:07:22.267833  693243 notify.go:220] Checking for updates...
	I0908 12:07:22.268136  693243 config.go:182] Loaded profile config "ha-819472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:07:22.268162  693243 status.go:174] checking status of ha-819472 ...
	I0908 12:07:22.268622  693243 cli_runner.go:164] Run: docker container inspect ha-819472 --format={{.State.Status}}
	I0908 12:07:22.289532  693243 status.go:371] ha-819472 host status = "Running" (err=<nil>)
	I0908 12:07:22.289584  693243 host.go:66] Checking if "ha-819472" exists ...
	I0908 12:07:22.289929  693243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-819472
	I0908 12:07:22.311814  693243 host.go:66] Checking if "ha-819472" exists ...
	I0908 12:07:22.312148  693243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:07:22.312195  693243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-819472
	I0908 12:07:22.332175  693243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/ha-819472/id_rsa Username:docker}
	I0908 12:07:22.417758  693243 ssh_runner.go:195] Run: systemctl --version
	I0908 12:07:22.422710  693243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:07:22.434753  693243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:07:22.487427  693243 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-08 12:07:22.47742885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:07:22.488060  693243 kubeconfig.go:125] found "ha-819472" server: "https://192.168.49.254:8443"
	I0908 12:07:22.488103  693243 api_server.go:166] Checking apiserver status ...
	I0908 12:07:22.488142  693243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:07:22.500491  693243 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1571/cgroup
	I0908 12:07:22.510507  693243 api_server.go:182] apiserver freezer: "6:freezer:/docker/91d91586ae72c4376d04341a5c3c15db555da888312a7a8d879e4cf60785ed6f/crio/crio-71a55765b9943fba144bba6cc7545513d3538076b4fc437d9cde14f9eeb78ab9"
	I0908 12:07:22.510584  693243 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/91d91586ae72c4376d04341a5c3c15db555da888312a7a8d879e4cf60785ed6f/crio/crio-71a55765b9943fba144bba6cc7545513d3538076b4fc437d9cde14f9eeb78ab9/freezer.state
	I0908 12:07:22.520035  693243 api_server.go:204] freezer state: "THAWED"
	I0908 12:07:22.520073  693243 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 12:07:22.526568  693243 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 12:07:22.526607  693243 status.go:463] ha-819472 apiserver status = Running (err=<nil>)
	I0908 12:07:22.526618  693243 status.go:176] ha-819472 status: &{Name:ha-819472 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:07:22.526636  693243 status.go:174] checking status of ha-819472-m02 ...
	I0908 12:07:22.526897  693243 cli_runner.go:164] Run: docker container inspect ha-819472-m02 --format={{.State.Status}}
	I0908 12:07:22.546397  693243 status.go:371] ha-819472-m02 host status = "Stopped" (err=<nil>)
	I0908 12:07:22.546420  693243 status.go:384] host is not running, skipping remaining checks
	I0908 12:07:22.546427  693243 status.go:176] ha-819472-m02 status: &{Name:ha-819472-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:07:22.546451  693243 status.go:174] checking status of ha-819472-m03 ...
	I0908 12:07:22.546722  693243 cli_runner.go:164] Run: docker container inspect ha-819472-m03 --format={{.State.Status}}
	I0908 12:07:22.565911  693243 status.go:371] ha-819472-m03 host status = "Running" (err=<nil>)
	I0908 12:07:22.565946  693243 host.go:66] Checking if "ha-819472-m03" exists ...
	I0908 12:07:22.566276  693243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-819472-m03
	I0908 12:07:22.584588  693243 host.go:66] Checking if "ha-819472-m03" exists ...
	I0908 12:07:22.584886  693243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:07:22.584925  693243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-819472-m03
	I0908 12:07:22.603392  693243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/ha-819472-m03/id_rsa Username:docker}
	I0908 12:07:22.689379  693243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:07:22.701934  693243 kubeconfig.go:125] found "ha-819472" server: "https://192.168.49.254:8443"
	I0908 12:07:22.701965  693243 api_server.go:166] Checking apiserver status ...
	I0908 12:07:22.701999  693243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:07:22.712753  693243 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	I0908 12:07:22.723846  693243 api_server.go:182] apiserver freezer: "6:freezer:/docker/c0368982f33cdbdd53c10a97e817670514963be25e6c639702615b4780485efe/crio/crio-50c77312a66c1b692ab81df572e74dc54120806bb15d115c31b9793f9ead8f4b"
	I0908 12:07:22.723953  693243 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c0368982f33cdbdd53c10a97e817670514963be25e6c639702615b4780485efe/crio/crio-50c77312a66c1b692ab81df572e74dc54120806bb15d115c31b9793f9ead8f4b/freezer.state
	I0908 12:07:22.733276  693243 api_server.go:204] freezer state: "THAWED"
	I0908 12:07:22.733304  693243 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 12:07:22.737588  693243 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 12:07:22.737620  693243 status.go:463] ha-819472-m03 apiserver status = Running (err=<nil>)
	I0908 12:07:22.737631  693243 status.go:176] ha-819472-m03 status: &{Name:ha-819472-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:07:22.737652  693243 status.go:174] checking status of ha-819472-m04 ...
	I0908 12:07:22.737944  693243 cli_runner.go:164] Run: docker container inspect ha-819472-m04 --format={{.State.Status}}
	I0908 12:07:22.756352  693243 status.go:371] ha-819472-m04 host status = "Running" (err=<nil>)
	I0908 12:07:22.756384  693243 host.go:66] Checking if "ha-819472-m04" exists ...
	I0908 12:07:22.756660  693243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-819472-m04
	I0908 12:07:22.776219  693243 host.go:66] Checking if "ha-819472-m04" exists ...
	I0908 12:07:22.776512  693243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:07:22.776558  693243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-819472-m04
	I0908 12:07:22.796754  693243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/ha-819472-m04/id_rsa Username:docker}
	I0908 12:07:22.885356  693243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:07:22.897841  693243 status.go:176] ha-819472-m04 status: &{Name:ha-819472-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.60s)
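Note: the verbose status log above shows how the command decides an apiserver is healthy: it finds the kube-apiserver process, reads its freezer cgroup state, and then probes /healthz on the cluster endpoint from the kubeconfig (https://192.168.49.254:8443 in this run). The sketch below reproduces only that final healthz probe; it skips TLS verification purely to stay self-contained, whereas the real status code trusts the cluster CA.

// healthz.go: minimal sketch of the apiserver healthz probe logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip certificate verification instead of
			// loading the cluster CA the way minikube status does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		log.Fatalf("apiserver unreachable: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}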

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (33.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 node start m02 --alsologtostderr -v 5
E0908 12:07:46.524593  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 node start m02 --alsologtostderr -v 5: (32.282880266s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.33s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.190366568s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 stop --alsologtostderr -v 5: (36.913976023s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 start --wait true --alsologtostderr -v 5
E0908 12:09:08.447929  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 start --wait true --alsologtostderr -v 5: (1m33.545817026s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.58s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 node delete m03 --alsologtostderr -v 5: (10.726349599s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.57s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 stop --alsologtostderr -v 5
E0908 12:10:56.269728  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 stop --alsologtostderr -v 5: (35.644901897s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5: exit status 7 (113.720636ms)

                                                
                                                
-- stdout --
	ha-819472
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-819472-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-819472-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:10:56.663631  710506 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:10:56.663838  710506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:10:56.663850  710506 out.go:374] Setting ErrFile to fd 2...
	I0908 12:10:56.663855  710506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:10:56.664083  710506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:10:56.664324  710506 out.go:368] Setting JSON to false
	I0908 12:10:56.664361  710506 mustload.go:65] Loading cluster: ha-819472
	I0908 12:10:56.664570  710506 notify.go:220] Checking for updates...
	I0908 12:10:56.664872  710506 config.go:182] Loaded profile config "ha-819472": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:10:56.664900  710506 status.go:174] checking status of ha-819472 ...
	I0908 12:10:56.665432  710506 cli_runner.go:164] Run: docker container inspect ha-819472 --format={{.State.Status}}
	I0908 12:10:56.685951  710506 status.go:371] ha-819472 host status = "Stopped" (err=<nil>)
	I0908 12:10:56.685993  710506 status.go:384] host is not running, skipping remaining checks
	I0908 12:10:56.686003  710506 status.go:176] ha-819472 status: &{Name:ha-819472 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:10:56.686038  710506 status.go:174] checking status of ha-819472-m02 ...
	I0908 12:10:56.686397  710506 cli_runner.go:164] Run: docker container inspect ha-819472-m02 --format={{.State.Status}}
	I0908 12:10:56.704342  710506 status.go:371] ha-819472-m02 host status = "Stopped" (err=<nil>)
	I0908 12:10:56.704369  710506 status.go:384] host is not running, skipping remaining checks
	I0908 12:10:56.704393  710506 status.go:176] ha-819472-m02 status: &{Name:ha-819472-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:10:56.704415  710506 status.go:174] checking status of ha-819472-m04 ...
	I0908 12:10:56.704668  710506 cli_runner.go:164] Run: docker container inspect ha-819472-m04 --format={{.State.Status}}
	I0908 12:10:56.722729  710506 status.go:371] ha-819472-m04 host status = "Stopped" (err=<nil>)
	I0908 12:10:56.722766  710506 status.go:384] host is not running, skipping remaining checks
	I0908 12:10:56.722775  710506 status.go:176] ha-819472-m04 status: &{Name:ha-819472-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.76s)
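Note: as the two status calls in this log show, `minikube status` exits with status 7 when hosts are stopped while still printing the per-node report. The Go sketch below tolerates that exit code and parses the plain-text output into per-node fields; it is a rough illustration, not the project's own parser, and the field names simply follow the stdout block above.

// statusparse.go: parse `minikube status` text output, treating exit code 7 as expected.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-819472", "status")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if err != nil && (!errors.As(err, &exitErr) || exitErr.ExitCode() != 7) {
		// Exit code 7 accompanies stopped hosts in the runs above; anything else is treated as a failure.
		log.Fatalf("status failed: %v", err)
	}

	nodes := map[string]map[string]string{}
	var current string
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		if !strings.Contains(line, ":") {
			current = line // a bare line such as "ha-819472-m02" starts a new node block
			nodes[current] = map[string]string{}
			continue
		}
		if current == "" {
			continue
		}
		parts := strings.SplitN(line, ":", 2)
		nodes[current][strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
	}
	for name, fields := range nodes {
		fmt.Printf("%-16s host=%-8s kubelet=%-8s apiserver=%s\n",
			name, fields["host"], fields["kubelet"], fields["apiserver"])
	}
}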

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (54.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 12:11:24.591876  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.641688643s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.45s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (42.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 node add --control-plane --alsologtostderr -v 5
E0908 12:11:52.290844  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-819472 node add --control-plane --alsologtostderr -v 5: (41.500076321s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-819472 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.37s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (74.24s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-145075 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-145075 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m14.243835851s)
--- PASS: TestJSONOutput/start/Command (74.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-145075 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-145075 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.81s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-145075 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-145075 --output=json --user=testUser: (5.806726772s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-233094 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-233094 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.765218ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"175912a4-4bcf-43b5-bc37-90c108576a62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-233094] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"25ab08c2-12a4-47e2-9e97-6767692cb151","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21512"}}
	{"specversion":"1.0","id":"8ef8b800-501c-4196-845d-206a44820644","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d637909f-1215-40be-b5c1-90c3e89df3aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig"}}
	{"specversion":"1.0","id":"e2620960-987c-46e6-afc5-eb99e5fe132e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube"}}
	{"specversion":"1.0","id":"0ace5428-627a-45ba-bafc-a88d1010563d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0d30ed49-e00c-469e-83d7-6fc967b3e0f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"787aa8e0-43c1-489e-9d01-ef0bc27d1cf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-233094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-233094
--- PASS: TestErrorJSONOutput (0.23s)
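Note: each line printed under --output=json is a CloudEvents-style JSON object like those in the stdout block above. The Go sketch below reads such lines from stdin and reports steps and errors; the type and data field names are taken from this report's output, and the program is only an illustration of consuming that stream.

// events.go: decode minikube's --output=json event lines from stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore anything that is not a JSON event
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}

A possible invocation, piping a start attempt through the sketch: out/minikube-linux-amd64 start -p json-output-145075 --output=json | go run events.go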

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.5s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-789639 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-789639 --network=: (27.394662316s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-789639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-789639
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-789639: (2.089486691s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.50s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (27.27s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-765939 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-765939 --network=bridge: (25.243733442s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-765939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-765939
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-765939: (2.0048834s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.27s)

                                                
                                    
TestKicExistingNetwork (23.55s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0908 12:15:05.894167  618620 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 12:15:05.911974  618620 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 12:15:05.912076  618620 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 12:15:05.912101  618620 cli_runner.go:164] Run: docker network inspect existing-network
W0908 12:15:05.928872  618620 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 12:15:05.928909  618620 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0908 12:15:05.928933  618620 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0908 12:15:05.929102  618620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 12:15:05.947362  618620 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a42c506aba4a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:58:c9:24:f1:2c} reservation:<nil>}
I0908 12:15:05.947987  618620 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00147ee30}
I0908 12:15:05.948028  618620 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 12:15:05.948083  618620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 12:15:06.000791  618620 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-821558 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-821558 --network=existing-network: (21.419306948s)
helpers_test.go:175: Cleaning up "existing-network-821558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-821558
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-821558: (1.988689068s)
I0908 12:15:29.427367  618620 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.55s)

                                                
                                    
TestKicCustomSubnet (29.73s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-758600 --subnet=192.168.60.0/24
E0908 12:15:56.269399  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-758600 --subnet=192.168.60.0/24: (27.619037181s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-758600 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-758600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-758600
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-758600: (2.092560193s)
--- PASS: TestKicCustomSubnet (29.73s)

                                                
                                    
TestKicStaticIP (24.95s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-806096 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-806096 --static-ip=192.168.200.200: (22.671186049s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-806096 ip
helpers_test.go:175: Cleaning up "static-ip-806096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-806096
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-806096: (2.134776984s)
--- PASS: TestKicStaticIP (24.95s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (54.67s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-127696 --driver=docker  --container-runtime=crio
E0908 12:16:24.584914  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-127696 --driver=docker  --container-runtime=crio: (23.67963768s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-154394 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-154394 --driver=docker  --container-runtime=crio: (25.614854687s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-127696
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-154394
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-154394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-154394
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-154394: (1.892271219s)
helpers_test.go:175: Cleaning up "first-127696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-127696
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-127696: (2.279971228s)
--- PASS: TestMinikubeProfile (54.67s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-412775 --memory=3072 --mount-string /tmp/TestMountStartserial95166564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-412775 --memory=3072 --mount-string /tmp/TestMountStartserial95166564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.234926906s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.24s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-412775 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.59s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-431960 --memory=3072 --mount-string /tmp/TestMountStartserial95166564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-431960 --memory=3072 --mount-string /tmp/TestMountStartserial95166564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.592892284s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.59s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-431960 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-412775 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-412775 --alsologtostderr -v=5: (1.648179262s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-431960 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-431960
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-431960: (1.192065109s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.4s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-431960
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-431960: (6.403671986s)
--- PASS: TestMountStart/serial/RestartStopped (7.40s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-431960 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (96.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296251 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0908 12:18:59.336918  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-296251 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m36.091727691s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.55s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-296251 -- rollout status deployment/busybox: (3.383658721s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-7tksv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-vnxmb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-7tksv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-vnxmb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-7tksv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-vnxmb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.87s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-7tksv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-7tksv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-vnxmb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296251 -- exec busybox-7b57f96db7-vnxmb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (54.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-296251 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-296251 -v=5 --alsologtostderr: (53.948205396s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.58s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-296251 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp testdata/cp-test.txt multinode-296251:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp multinode-296251:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile163239813/001/cp-test_multinode-296251.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp multinode-296251:/home/docker/cp-test.txt multinode-296251-m02:/home/docker/cp-test_multinode-296251_multinode-296251-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m02 "sudo cat /home/docker/cp-test_multinode-296251_multinode-296251-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp multinode-296251:/home/docker/cp-test.txt multinode-296251-m03:/home/docker/cp-test_multinode-296251_multinode-296251-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m03 "sudo cat /home/docker/cp-test_multinode-296251_multinode-296251-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp testdata/cp-test.txt multinode-296251-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp multinode-296251-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile163239813/001/cp-test_multinode-296251-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp multinode-296251-m02:/home/docker/cp-test.txt multinode-296251:/home/docker/cp-test_multinode-296251-m02_multinode-296251.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251 "sudo cat /home/docker/cp-test_multinode-296251-m02_multinode-296251.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp multinode-296251-m02:/home/docker/cp-test.txt multinode-296251-m03:/home/docker/cp-test_multinode-296251-m02_multinode-296251-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m03 "sudo cat /home/docker/cp-test_multinode-296251-m02_multinode-296251-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp testdata/cp-test.txt multinode-296251-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp multinode-296251-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile163239813/001/cp-test_multinode-296251-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp multinode-296251-m03:/home/docker/cp-test.txt multinode-296251:/home/docker/cp-test_multinode-296251-m03_multinode-296251.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251 "sudo cat /home/docker/cp-test_multinode-296251-m03_multinode-296251.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 cp multinode-296251-m03:/home/docker/cp-test.txt multinode-296251-m02:/home/docker/cp-test_multinode-296251-m03_multinode-296251-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 ssh -n multinode-296251-m02 "sudo cat /home/docker/cp-test_multinode-296251-m03_multinode-296251-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.41s)

                                                
                                    
TestMultiNode/serial/StopNode (2.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-296251 node stop m03: (1.19164882s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-296251 status: exit status 7 (472.060036ms)

                                                
                                                
-- stdout --
	multinode-296251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-296251-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-296251-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-296251 status --alsologtostderr: exit status 7 (481.71491ms)

                                                
                                                
-- stdout --
	multinode-296251
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-296251-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-296251-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:20:34.404942  775697 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:20:34.405202  775697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:20:34.405211  775697 out.go:374] Setting ErrFile to fd 2...
	I0908 12:20:34.405215  775697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:20:34.405448  775697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:20:34.405629  775697 out.go:368] Setting JSON to false
	I0908 12:20:34.405661  775697 mustload.go:65] Loading cluster: multinode-296251
	I0908 12:20:34.405717  775697 notify.go:220] Checking for updates...
	I0908 12:20:34.406087  775697 config.go:182] Loaded profile config "multinode-296251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:20:34.406119  775697 status.go:174] checking status of multinode-296251 ...
	I0908 12:20:34.406652  775697 cli_runner.go:164] Run: docker container inspect multinode-296251 --format={{.State.Status}}
	I0908 12:20:34.425191  775697 status.go:371] multinode-296251 host status = "Running" (err=<nil>)
	I0908 12:20:34.425221  775697 host.go:66] Checking if "multinode-296251" exists ...
	I0908 12:20:34.425488  775697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-296251
	I0908 12:20:34.444108  775697 host.go:66] Checking if "multinode-296251" exists ...
	I0908 12:20:34.444448  775697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:20:34.444518  775697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-296251
	I0908 12:20:34.462597  775697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/multinode-296251/id_rsa Username:docker}
	I0908 12:20:34.549446  775697 ssh_runner.go:195] Run: systemctl --version
	I0908 12:20:34.553935  775697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:20:34.565430  775697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:20:34.615738  775697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-08 12:20:34.606252582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:20:34.616391  775697 kubeconfig.go:125] found "multinode-296251" server: "https://192.168.67.2:8443"
	I0908 12:20:34.616434  775697 api_server.go:166] Checking apiserver status ...
	I0908 12:20:34.616485  775697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:20:34.628390  775697 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1563/cgroup
	I0908 12:20:34.638139  775697 api_server.go:182] apiserver freezer: "6:freezer:/docker/0dcead745d5c7e6bc7c1b5becf2b2bf78deda1d8c261497a4ecd13f558f3fd8b/crio/crio-1b81fd1befd40ce239f22118db6f82db37b9aa921dcb10fa57ed878fe4e9cbbb"
	I0908 12:20:34.638215  775697 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0dcead745d5c7e6bc7c1b5becf2b2bf78deda1d8c261497a4ecd13f558f3fd8b/crio/crio-1b81fd1befd40ce239f22118db6f82db37b9aa921dcb10fa57ed878fe4e9cbbb/freezer.state
	I0908 12:20:34.646901  775697 api_server.go:204] freezer state: "THAWED"
	I0908 12:20:34.646929  775697 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 12:20:34.651092  775697 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 12:20:34.651126  775697 status.go:463] multinode-296251 apiserver status = Running (err=<nil>)
	I0908 12:20:34.651144  775697 status.go:176] multinode-296251 status: &{Name:multinode-296251 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:20:34.651167  775697 status.go:174] checking status of multinode-296251-m02 ...
	I0908 12:20:34.651443  775697 cli_runner.go:164] Run: docker container inspect multinode-296251-m02 --format={{.State.Status}}
	I0908 12:20:34.669919  775697 status.go:371] multinode-296251-m02 host status = "Running" (err=<nil>)
	I0908 12:20:34.669947  775697 host.go:66] Checking if "multinode-296251-m02" exists ...
	I0908 12:20:34.670247  775697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-296251-m02
	I0908 12:20:34.688925  775697 host.go:66] Checking if "multinode-296251-m02" exists ...
	I0908 12:20:34.689221  775697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:20:34.689259  775697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-296251-m02
	I0908 12:20:34.708947  775697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21512-614854/.minikube/machines/multinode-296251-m02/id_rsa Username:docker}
	I0908 12:20:34.801291  775697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:20:34.813846  775697 status.go:176] multinode-296251-m02 status: &{Name:multinode-296251-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:20:34.813882  775697 status.go:174] checking status of multinode-296251-m03 ...
	I0908 12:20:34.814175  775697 cli_runner.go:164] Run: docker container inspect multinode-296251-m03 --format={{.State.Status}}
	I0908 12:20:34.832916  775697 status.go:371] multinode-296251-m03 host status = "Stopped" (err=<nil>)
	I0908 12:20:34.832947  775697 status.go:384] host is not running, skipping remaining checks
	I0908 12:20:34.832955  775697 status.go:176] multinode-296251-m03 status: &{Name:multinode-296251-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-296251 node start m03 -v=5 --alsologtostderr: (6.728718465s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.42s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-296251
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-296251
E0908 12:20:56.269370  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-296251: (24.817135738s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296251 --wait=true -v=5 --alsologtostderr
E0908 12:21:24.585081  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-296251 --wait=true -v=5 --alsologtostderr: (55.33257446s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-296251
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.26s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-296251 node delete m03: (4.74206713s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.33s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-296251 stop: (23.643136587s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-296251 status: exit status 7 (97.486486ms)

                                                
                                                
-- stdout --
	multinode-296251
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-296251-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-296251 status --alsologtostderr: exit status 7 (94.580613ms)

                                                
                                                
-- stdout --
	multinode-296251
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-296251-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:22:31.632430  785343 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:22:31.632691  785343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:22:31.632699  785343 out.go:374] Setting ErrFile to fd 2...
	I0908 12:22:31.632703  785343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:22:31.632913  785343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:22:31.633131  785343 out.go:368] Setting JSON to false
	I0908 12:22:31.633170  785343 mustload.go:65] Loading cluster: multinode-296251
	I0908 12:22:31.633325  785343 notify.go:220] Checking for updates...
	I0908 12:22:31.633590  785343 config.go:182] Loaded profile config "multinode-296251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:22:31.633612  785343 status.go:174] checking status of multinode-296251 ...
	I0908 12:22:31.634071  785343 cli_runner.go:164] Run: docker container inspect multinode-296251 --format={{.State.Status}}
	I0908 12:22:31.652210  785343 status.go:371] multinode-296251 host status = "Stopped" (err=<nil>)
	I0908 12:22:31.652242  785343 status.go:384] host is not running, skipping remaining checks
	I0908 12:22:31.652273  785343 status.go:176] multinode-296251 status: &{Name:multinode-296251 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:22:31.652364  785343 status.go:174] checking status of multinode-296251-m02 ...
	I0908 12:22:31.652608  785343 cli_runner.go:164] Run: docker container inspect multinode-296251-m02 --format={{.State.Status}}
	I0908 12:22:31.670272  785343 status.go:371] multinode-296251-m02 host status = "Stopped" (err=<nil>)
	I0908 12:22:31.670317  785343 status.go:384] host is not running, skipping remaining checks
	I0908 12:22:31.670325  785343 status.go:176] multinode-296251-m02 status: &{Name:multinode-296251-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296251 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0908 12:22:47.652958  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-296251 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (55.407478253s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296251 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.00s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-296251
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296251-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-296251-m02 --driver=docker  --container-runtime=crio: exit status 14 (78.943872ms)

                                                
                                                
-- stdout --
	* [multinode-296251-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-296251-m02' is duplicated with machine name 'multinode-296251-m02' in profile 'multinode-296251'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296251-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-296251-m03 --driver=docker  --container-runtime=crio: (22.456358463s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-296251
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-296251: exit status 80 (284.812599ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-296251 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-296251-m03 already exists in multinode-296251-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-296251-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-296251-m03: (1.896908859s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.77s)

                                                
                                    
TestPreload (115.87s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-480072 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-480072 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (50.930088141s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-480072 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-480072 image pull gcr.io/k8s-minikube/busybox: (2.483993612s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-480072
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-480072: (5.820837387s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-480072 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-480072 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.067737161s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-480072 image list
helpers_test.go:175: Cleaning up "test-preload-480072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-480072
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-480072: (2.340183706s)
--- PASS: TestPreload (115.87s)

                                                
                                    
TestInsufficientStorage (12.93s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-548650 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E0908 12:26:24.584946  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-548650 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.543218056s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"29f0857d-d9ea-4f62-868a-8aaaed52bcda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-548650] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae8b08a7-e077-470d-850c-df1d66026c8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21512"}}
	{"specversion":"1.0","id":"8a4268cb-b2ca-4d34-924d-0de25117a456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5a1ca8f3-ff06-4895-998c-21e24422e0c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig"}}
	{"specversion":"1.0","id":"d9196e53-466b-4eab-b85f-d9aa86a85bee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube"}}
	{"specversion":"1.0","id":"19e08be9-beba-4362-be34-80903603332c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3bc54d09-fc8e-4d80-9484-8c71d8ebfd9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9100e90e-fd70-4357-9842-f68aa8d23be9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3aeb43ff-22ff-4f9a-a26d-d86e2c2bef85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"676e2031-aa0e-4b21-92bf-f5cc7ddd8814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4fe7acc-9d52-47ed-b4bc-9aac3d65a33b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"74f6c26f-a5ff-4f9b-9e97-aef69d56dec9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-548650\" primary control-plane node in \"insufficient-storage-548650\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec8d6f51-4029-44b9-85e9-41fe96f3582c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9a467ba-aefb-472e-8b80-670f533a2be6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ab152d7-6b76-4c0f-a7e7-9daed3beb0b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-548650 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-548650 --output=json --layout=cluster: exit status 7 (276.970028ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-548650","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-548650","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 12:26:30.064872  806748 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-548650" does not appear in /home/jenkins/minikube-integration/21512-614854/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-548650 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-548650 --output=json --layout=cluster: exit status 7 (274.791941ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-548650","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-548650","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 12:26:30.341047  806846 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-548650" does not appear in /home/jenkins/minikube-integration/21512-614854/kubeconfig
	E0908 12:26:30.352418  806846 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/insufficient-storage-548650/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-548650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-548650
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-548650: (1.837571347s)
--- PASS: TestInsufficientStorage (12.93s)
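
The --output=json --layout=cluster payload shown above is machine-readable; a minimal sketch for pulling the overall and per-node status names out of it (assuming jq is available on the host, which the suite itself does not use):

	# sketch only: summarize the cluster layout JSON printed by the status runs above
	out/minikube-linux-amd64 status -p insufficient-storage-548650 --output=json --layout=cluster \
	  | jq -r '.StatusName, (.Nodes[] | "\(.Name): \(.StatusName)")'
	# here this would print "InsufficientStorage" followed by one line per node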

                                                
                                    
x
+
TestRunningBinaryUpgrade (47.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.832051566 start -p running-upgrade-229332 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.832051566 start -p running-upgrade-229332 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.595330323s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-229332 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-229332 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.780425756s)
helpers_test.go:175: Cleaning up "running-upgrade-229332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-229332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-229332: (2.019067673s)
--- PASS: TestRunningBinaryUpgrade (47.06s)

                                                
                                    
x
+
TestKubernetesUpgrade (343.16s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-770876 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-770876 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.290848697s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-770876
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-770876: (5.992794568s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-770876 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-770876 status --format={{.Host}}: exit status 7 (115.09101ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-770876 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-770876 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.287113354s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-770876 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-770876 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-770876 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (74.446397ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-770876] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-770876
	    minikube start -p kubernetes-upgrade-770876 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7708762 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-770876 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-770876 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-770876 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.088997562s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-770876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-770876
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-770876: (2.244697698s)
--- PASS: TestKubernetesUpgrade (343.16s)
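
The K8S_DOWNGRADE_UNSUPPORTED exit above is expected: minikube refuses to downgrade an existing v1.34.0 cluster in place. A condensed sketch of option 1 from its own suggestion, recreating the profile at the lower version; the driver and runtime flags mirror what the test passes and are otherwise assumptions:

	# sketch: recreate the profile instead of downgrading in place
	minikube delete -p kubernetes-upgrade-770876
	minikube start -p kubernetes-upgrade-770876 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio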

                                                
                                    
x
+
TestMissingContainerUpgrade (72.38s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4169083572 start -p missing-upgrade-758661 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4169083572 start -p missing-upgrade-758661 --memory=3072 --driver=docker  --container-runtime=crio: (26.872775225s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-758661
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-758661
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-758661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-758661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.030410192s)
helpers_test.go:175: Cleaning up "missing-upgrade-758661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-758661
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-758661: (2.044500746s)
--- PASS: TestMissingContainerUpgrade (72.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-266116 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-266116 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (89.09202ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-266116] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
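
The MK_USAGE exit above is the guard against combining --no-kubernetes with an explicit --kubernetes-version. A sketch of the two ways the conflict can be resolved, using only commands already shown in this report:

	# either clear a globally configured version, as the error text suggests
	minikube config unset kubernetes-version
	# or start the profile without the version flag (what the later StartWithStopK8s step does)
	out/minikube-linux-amd64 start -p NoKubernetes-266116 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio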

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-266116 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-266116 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.805296487s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-266116 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.26s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (63.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1988594311 start -p stopped-upgrade-365068 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1988594311 start -p stopped-upgrade-365068 --memory=3072 --vm-driver=docker  --container-runtime=crio: (45.518838339s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1988594311 -p stopped-upgrade-365068 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1988594311 -p stopped-upgrade-365068 stop: (1.231651365s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-365068 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-365068 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.370598409s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (63.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (27.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-266116 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-266116 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.66007113s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-266116 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-266116 status -o json: exit status 2 (307.968793ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-266116","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-266116
I0908 12:27:39.572615  618620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 12:27:39.572714  618620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0908 12:27:39.610165  618620 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0908 12:27:39.610209  618620 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0908 12:27:39.610312  618620 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 12:27:39.610347  618620 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1847856941/002/docker-machine-driver-kvm2
	I0908 12:27:39.633695  618620 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1847856941/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000695730 gz:0xc000695738 tar:0xc0006956e0 tar.bz2:0xc0006956f0 tar.gz:0xc000695700 tar.xz:0xc000695710 tar.zst:0xc000695720 tbz2:0xc0006956f0 tgz:0xc000695700 txz:0xc000695710 tzst:0xc000695720 xz:0xc000695740 zip:0xc000695750 zst:0xc000695748] Getters:map[file:0xc0023b9610 http:0xc002387a90 https:0xc002387ae0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 12:27:39.633753  618620 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1847856941/002/docker-machine-driver-kvm2
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-266116: (2.001607706s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-283124 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-283124 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (162.829767ms)

                                                
                                                
-- stdout --
	* [false-283124] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 12:27:20.728769  821099 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:27:20.728906  821099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:27:20.728918  821099 out.go:374] Setting ErrFile to fd 2...
	I0908 12:27:20.728924  821099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:27:20.729142  821099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-614854/.minikube/bin
	I0908 12:27:20.729826  821099 out.go:368] Setting JSON to false
	I0908 12:27:20.731082  821099 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11385,"bootTime":1757323056,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 12:27:20.731211  821099 start.go:140] virtualization: kvm guest
	I0908 12:27:20.733184  821099 out.go:179] * [false-283124] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 12:27:20.734463  821099 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 12:27:20.734482  821099 notify.go:220] Checking for updates...
	I0908 12:27:20.737198  821099 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:27:20.738596  821099 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-614854/kubeconfig
	I0908 12:27:20.740125  821099 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-614854/.minikube
	I0908 12:27:20.741534  821099 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 12:27:20.742869  821099 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:27:20.744640  821099 config.go:182] Loaded profile config "NoKubernetes-266116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0908 12:27:20.744779  821099 config.go:182] Loaded profile config "offline-crio-248556": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:27:20.744865  821099 config.go:182] Loaded profile config "stopped-upgrade-365068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0908 12:27:20.744981  821099 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:27:20.770970  821099 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:27:20.771123  821099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:27:20.828445  821099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:76 SystemTime:2025-09-08 12:27:20.818537915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647984640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0908 12:27:20.828576  821099 docker.go:318] overlay module found
	I0908 12:27:20.830333  821099 out.go:179] * Using the docker driver based on user configuration
	I0908 12:27:20.831620  821099 start.go:304] selected driver: docker
	I0908 12:27:20.831638  821099 start.go:918] validating driver "docker" against <nil>
	I0908 12:27:20.831675  821099 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:27:20.833790  821099 out.go:203] 
	W0908 12:27:20.835020  821099 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 12:27:20.836218  821099 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-283124 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-283124" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-266116
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-crio-248556
contexts:
- context:
    cluster: NoKubernetes-266116
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-266116
  name: NoKubernetes-266116
- context:
    cluster: offline-crio-248556
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: offline-crio-248556
  name: offline-crio-248556
current-context: offline-crio-248556
kind: Config
preferences: {}
users:
- name: NoKubernetes-266116
  user:
    client-certificate: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/NoKubernetes-266116/client.crt
    client-key: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/NoKubernetes-266116/client.key
- name: offline-crio-248556
  user:
    client-certificate: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/offline-crio-248556/client.crt
    client-key: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/offline-crio-248556/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-283124

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-283124"

                                                
                                                
----------------------- debugLogs end: false-283124 [took: 3.043650727s] --------------------------------
helpers_test.go:175: Cleaning up "false-283124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-283124
--- PASS: TestNetworkPlugins/group/false (3.37s)
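
The kubeconfig dumped in the debugLogs above lists two live contexts (NoKubernetes-266116 and offline-crio-248556), while the false-283124 context was never created, which is why every kubectl probe reports "context was not found". A sketch of selecting one of the existing contexts with standard kubectl commands (not something the test itself runs):

	# sketch: inspect and switch to a context that actually exists in the dumped kubeconfig
	kubectl config get-contexts
	kubectl config use-context offline-crio-248556
	kubectl get nodes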

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-365068
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-365068: (1.002669042s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-266116 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-266116 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.314102112s)
--- PASS: TestNoKubernetes/serial/Start (10.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-266116 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-266116 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.845248ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.098072748s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-266116
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-266116: (1.26648269s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-266116 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-266116 --driver=docker  --container-runtime=crio: (6.661913426s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-266116 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-266116 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.330121ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestPause/serial/Start (76.68s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-355015 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-355015 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m16.679777128s)
--- PASS: TestPause/serial/Start (76.68s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (29.17s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-355015 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-355015 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.144612413s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (75.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m15.224842136s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.23s)

                                                
                                    
x
+
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-355015 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-355015 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-355015 --output=json --layout=cluster: exit status 2 (314.647784ms)

                                                
                                                
-- stdout --
	{"Name":"pause-355015","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-355015","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
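
The status JSON above is the key artifact of this step: with the profile paused, "minikube status --output=json --layout=cluster" exits with status 2 and reports StatusCode 418 ("Paused") for the apiserver and 405 ("Stopped") for the kubelet. The following standalone Go sketch (not part of the test suite; the struct is a hand-written subset of the printed JSON, and the hard-coded binary path and profile name are copied from the log) shows one way to decode that output and inspect the per-node component states.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// component mirrors one entry of the "Components" maps in the JSON above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

// clusterStatus is a hand-written subset of the --layout=cluster output.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Same invocation as status_test.go above; the profile name comes from the log.
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "pause-355015", "--output=json", "--layout=cluster").Output()
	// A paused profile makes the command exit non-zero (exit status 2 in the log),
	// so only give up if there is no JSON on stdout to decode at all.
	if len(out) == 0 && err != nil {
		fmt.Fprintln(os.Stderr, "status failed:", err)
		os.Exit(1)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	for _, n := range st.Nodes {
		api := n.Components["apiserver"]
		kubelet := n.Components["kubelet"]
		// In the run above this prints: apiserver=Paused (418), kubelet=Stopped (405).
		fmt.Printf("node %s: cluster=%s apiserver=%s (%d) kubelet=%s (%d)\n",
			n.Name, st.StatusName, api.StatusName, api.StatusCode,
			kubelet.StatusName, kubelet.StatusCode)
	}
}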

                                                
                                    
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-355015 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
TestPause/serial/PauseAgain (0.9s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-355015 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.90s)

                                                
                                    
TestPause/serial/DeletePaused (2.97s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-355015 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-355015 --alsologtostderr -v=5: (2.967878718s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.79s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-355015
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-355015: exit status 1 (18.750648ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-355015: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.79s)
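
VerifyDeletedResources treats the non-zero exit from "docker volume inspect pause-355015" as the expected outcome: once "minikube delete" has run, the volume named after the profile must no longer exist. Below is a minimal standalone Go sketch of that check (the profile name and the "no such volume" string match are taken from the log above).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors pause_test.go's check: after `minikube delete -p pause-355015`,
	// inspecting the profile's volume should fail.
	out, err := exec.Command("docker", "volume", "inspect", "pause-355015").CombinedOutput()
	switch {
	case err == nil:
		fmt.Printf("volume still exists, delete did not clean up:\n%s", out)
	case strings.Contains(string(out), "no such volume"):
		// Same stderr as in the log above; this is the expected, passing case.
		fmt.Println("volume removed as expected")
	default:
		fmt.Println("unexpected docker failure:", err)
	}
}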

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (72.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m12.918118009s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.92s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-283124 "pgrep -a kubelet"
I0908 12:30:37.624065  618620 config.go:182] Loaded profile config "auto-283124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-283124 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7sqc4" [e210d491-0cdb-4f6d-8dfb-985eda034675] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7sqc4" [e210d491-0cdb-4f6d-8dfb-985eda034675] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004520691s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.21s)
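
Each NetCatPod step replaces testdata/netcat-deployment.yaml and then waits for pods labelled app=netcat in the default namespace to become healthy. A rough standalone equivalent of that wait, using client-go against the kubeconfig context created for the profile, is sketched below (the 2-second poll interval and the simple phase check are assumptions; the real helper also checks readiness and enforces the 15m timeout).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig and pin the context created by
	// `minikube start -p auto-283124`, like `kubectl --context auto-283124`.
	loader := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		loader, &clientcmd.ConfigOverrides{CurrentContext: "auto-283124"}).ClientConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
			if p.Status.Phase == corev1.PodRunning {
				return // good enough for this sketch; the real helper also waits for readiness
			}
		}
		time.Sleep(2 * time.Second)
	}
}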

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-283124 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
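
The HairPin step is the least self-explanatory of the group: from inside the netcat deployment it connects to the pod's own Service name ("netcat") on port 8080, which only succeeds when hairpin traffic (a pod reaching itself through its own Service) is routed correctly by the CNI. A minimal sketch of the same probe, shelling out to kubectl with the context name from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as net_test.go:264 above: from inside the deployment, dial the
	// Service named "netcat" on port 8080 with a 5-second timeout.
	cmd := exec.Command("kubectl", "--context", "auto-283124",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("hairpin connection failed: %v\n%s", err, out)
		return
	}
	fmt.Println("hairpin connection succeeded")
}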

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-4vgsk" [94498980-a33f-4618-8f64-535a5c5c7d55] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003714042s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-283124 "pgrep -a kubelet"
I0908 12:31:15.696499  618620 config.go:182] Loaded profile config "kindnet-283124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-283124 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9bvxb" [4c012c35-6f2d-4832-b64e-f8d576053885] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9bvxb" [4c012c35-6f2d-4832-b64e-f8d576053885] Running
E0908 12:31:24.585751  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.0042513s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-283124 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (50.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.418495278s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-283124 "pgrep -a kubelet"
I0908 12:32:35.360405  618620 config.go:182] Loaded profile config "custom-flannel-283124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-283124 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vlj2c" [ea5b6836-98c5-4099-aa6f-821e0aeb1546] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vlj2c" [ea5b6836-98c5-4099-aa6f-821e0aeb1546] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004613444s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-283124 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (70.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m10.110832471s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.4983156s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.50s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (70.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-283124 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.773974493s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.77s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-mskbg" [eb8c8bee-acf6-4f6d-a892-c296b83389e5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003653946s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-283124 "pgrep -a kubelet"
I0908 12:34:05.765922  618620 config.go:182] Loaded profile config "enable-default-cni-283124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-283124 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nnfqb" [f23e3679-7682-493c-9886-395a453a1116] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nnfqb" [f23e3679-7682-493c-9886-395a453a1116] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003398986s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-283124 "pgrep -a kubelet"
I0908 12:34:08.686918  618620 config.go:182] Loaded profile config "flannel-283124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-283124 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4n5mx" [7c075ad6-7666-4bbe-980f-b3cafff26c43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4n5mx" [7c075ad6-7666-4bbe-980f-b3cafff26c43] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003856095s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-283124 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-283124 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-283124 "pgrep -a kubelet"
I0908 12:34:34.353683  618620 config.go:182] Loaded profile config "bridge-283124": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-283124 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ppqcn" [24f7dadf-7cd3-44c7-af04-a7ce846c63ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ppqcn" [24f7dadf-7cd3-44c7-af04-a7ce846c63ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004664438s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (53.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-896003 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-896003 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.032630064s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (57.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-997730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-997730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (57.305900363s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-283124 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-283124 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m17.457227781s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-896003 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2306dc7b-b719-474c-8953-17833334f0be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2306dc7b-b719-474c-8953-17833334f0be] Running
E0908 12:35:37.822170  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:35:37.828690  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:35:37.840168  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:35:37.861677  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:35:37.903122  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:35:37.984647  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003503108s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-896003 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-896003 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0908 12:35:38.146944  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:35:38.468391  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:35:39.109751  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-896003 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-896003 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-896003 --alsologtostderr -v=3: (12.095493429s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-997730 create -f testdata/busybox.yaml
E0908 12:35:39.339829  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3000d066-b77a-40cb-b4b4-a44c5d0adc59] Pending
E0908 12:35:40.391753  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [3000d066-b77a-40cb-b4b4-a44c5d0adc59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0908 12:35:42.953474  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [3000d066-b77a-40cb-b4b4-a44c5d0adc59] Running
E0908 12:35:48.075419  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00396559s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-997730 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-997730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-997730 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-997730 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-997730 --alsologtostderr -v=3: (11.986795978s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-896003 -n old-k8s-version-896003
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-896003 -n old-k8s-version-896003: exit status 7 (73.632448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-896003 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
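
EnableAddonAfterStop relies on the convention noted in the log as "status error: exit status 7 (may be ok)": minikube status exits with code 7 when the profile exists but the host is stopped. A small Go sketch of how that exit code can be distinguished from other failures (the binary path, profile and node names are copied from the commands above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-896003", "-n", "old-k8s-version-896003")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s\n", strings.TrimSpace(string(out)))
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7: the profile exists but the host is stopped; stdout still
		// carries the state ("Stopped" in the log above), so it is safe to go on
		// and enable addons on the stopped profile.
		fmt.Printf("host reported %q (exit 7), continuing\n", strings.TrimSpace(string(out)))
	default:
		fmt.Println("unexpected status error:", err)
	}
}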

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (46.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-896003 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0908 12:35:56.269869  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:35:58.317079  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-896003 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.569633172s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-896003 -n old-k8s-version-896003
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997730 -n no-preload-997730
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997730 -n no-preload-997730: exit status 7 (79.712752ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-997730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (52.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-997730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 12:36:09.414157  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:09.420546  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:09.432051  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:09.453499  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:09.495042  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:09.576539  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:09.738709  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:10.060697  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:10.702519  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:11.983824  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:14.545223  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:18.799312  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:36:19.666977  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-997730 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (52.11898834s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-997730 -n no-preload-997730
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cdee705c-0c6f-47df-aca6-5a79ff2b25c3] Pending
E0908 12:36:24.585436  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [cdee705c-0c6f-47df-aca6-5a79ff2b25c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cdee705c-0c6f-47df-aca6-5a79ff2b25c3] Running
E0908 12:36:29.908386  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004332296s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-039958 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-039958 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-039958 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-039958 --alsologtostderr -v=3: (11.99133018s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958: exit status 7 (83.723725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-039958 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 12:36:50.390275  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-039958 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (47.701211426s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (33.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-139998 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-139998 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (33.032766127s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-139998 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-139998 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-139998 --alsologtostderr -v=3: (1.208689737s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-139998 -n newest-cni-139998
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-139998 -n newest-cni-139998: exit status 7 (73.674071ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-139998 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-139998 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 12:47:35.559763  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-139998 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (14.889565105s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-139998 -n newest-cni-139998
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-139998 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-139998 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-139998 -n newest-cni-139998
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-139998 -n newest-cni-139998: exit status 2 (315.857929ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-139998 -n newest-cni-139998
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-139998 -n newest-cni-139998: exit status 2 (315.214205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-139998 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-139998 -n newest-cni-139998
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-139998 -n newest-cni-139998
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (72.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m12.762353481s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-095356 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a4034bd1-ee40-42ed-aa3d-e011a612b5f4] Pending
helpers_test.go:352: "busybox" [a4034bd1-ee40-42ed-aa3d-e011a612b5f4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0908 12:49:02.378402  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [a4034bd1-ee40-42ed-aa3d-e011a612b5f4] Running
E0908 12:49:05.952087  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003721181s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-095356 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-095356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-095356 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-095356 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-095356 --alsologtostderr -v=3: (11.923574392s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-095356 -n embed-certs-095356
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-095356 -n embed-certs-095356: exit status 7 (81.635325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-095356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (52.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 12:49:34.571354  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-095356 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (51.990140149s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-095356 -n embed-certs-095356
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (476.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5gcsj" [a473677c-f743-48a7-b1f8-d1397a52d8e4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 12:50:37.821850  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:50:56.269736  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:51:09.414296  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:51:24.585279  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/functional-982703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:52:00.887299  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/auto-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:52:19.341677  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/addons-960652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:52:32.478449  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/kindnet-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:52:35.560385  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:53:58.625786  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/custom-flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:54:02.378361  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/flannel-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:54:05.952108  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/enable-default-cni-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:54:34.571591  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/bridge-283124/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5gcsj" [a473677c-f743-48a7-b1f8-d1397a52d8e4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7m56.003999733s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (476.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-896003 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-896003 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-896003 -n old-k8s-version-896003
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-896003 -n old-k8s-version-896003: exit status 2 (300.879942ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-896003 -n old-k8s-version-896003
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-896003 -n old-k8s-version-896003: exit status 2 (312.027557ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-896003 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-896003 -n old-k8s-version-896003
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-896003 -n old-k8s-version-896003
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-997730 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-997730 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997730 -n no-preload-997730
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997730 -n no-preload-997730: exit status 2 (299.554906ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-997730 -n no-preload-997730
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-997730 -n no-preload-997730: exit status 2 (305.578323ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-997730 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-997730 -n no-preload-997730
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-997730 -n no-preload-997730
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-039958 image list --format=json
E0908 12:55:39.945609  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-039958 --alsologtostderr -v=1
E0908 12:55:40.267921  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
E0908 12:55:40.909738  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958: exit status 2 (312.025678ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958: exit status 2 (317.667525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-039958 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
E0908 12:55:42.191821  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/no-preload-997730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-039958 -n default-k8s-diff-port-039958
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5gcsj" [a473677c-f743-48a7-b1f8-d1397a52d8e4] Running
E0908 12:58:12.859645  618620 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/old-k8s-version-896003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003876451s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-095356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-095356 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-095356 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-095356 -n embed-certs-095356
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-095356 -n embed-certs-095356: exit status 2 (303.774656ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-095356 -n embed-certs-095356
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-095356 -n embed-certs-095356: exit status 2 (300.164795ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-095356 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-095356 -n embed-certs-095356
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-095356 -n embed-certs-095356
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.72s)

                                                
                                    

Test skip (27/325)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-960652 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-283124 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-283124" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-266116
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-crio-248556
contexts:
- context:
    cluster: NoKubernetes-266116
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-266116
  name: NoKubernetes-266116
- context:
    cluster: offline-crio-248556
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: offline-crio-248556
  name: offline-crio-248556
current-context: offline-crio-248556
kind: Config
preferences: {}
users:
- name: NoKubernetes-266116
  user:
    client-certificate: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/NoKubernetes-266116/client.crt
    client-key: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/NoKubernetes-266116/client.key
- name: offline-crio-248556
  user:
    client-certificate: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/offline-crio-248556/client.crt
    client-key: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/offline-crio-248556/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-283124

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-283124"

                                                
                                                
----------------------- debugLogs end: kubenet-283124 [took: 3.647039036s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-283124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-283124
--- SKIP: TestNetworkPlugins/group/kubenet (3.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-283124 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-283124" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-266116
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21512-614854/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-crio-248556
contexts:
- context:
    cluster: NoKubernetes-266116
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: NoKubernetes-266116
  name: NoKubernetes-266116
- context:
    cluster: offline-crio-248556
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 12:27:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: offline-crio-248556
  name: offline-crio-248556
current-context: offline-crio-248556
kind: Config
preferences: {}
users:
- name: NoKubernetes-266116
  user:
    client-certificate: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/NoKubernetes-266116/client.crt
    client-key: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/NoKubernetes-266116/client.key
- name: offline-crio-248556
  user:
    client-certificate: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/offline-crio-248556/client.crt
    client-key: /home/jenkins/minikube-integration/21512-614854/.minikube/profiles/offline-crio-248556/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-283124

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-283124" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-283124"

                                                
                                                
----------------------- debugLogs end: cilium-283124 [took: 3.605971952s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-283124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-283124
--- SKIP: TestNetworkPlugins/group/cilium (3.78s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-173021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-173021
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    