Test Report: Docker_Linux_crio 21409

0aa34a444c66e47b3763835c9f1ccee8527d3e22:2025-09-04:41276

Test fail (16/326)

TestAddons/parallel/Ingress (153.59s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-306757 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-306757 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-306757 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3972f4aa-32a7-4e83-a000-282db0311811] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3972f4aa-32a7-4e83-a000-282db0311811] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003780614s
I0904 06:03:51.723307 1520716 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-306757 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.43169627s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-306757 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-306757
helpers_test.go:243: (dbg) docker inspect addons-306757:

-- stdout --
	[
	    {
	        "Id": "1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801",
	        "Created": "2025-09-04T06:00:46.02292929Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1522595,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:00:46.052789301Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801/hostname",
	        "HostsPath": "/var/lib/docker/containers/1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801/hosts",
	        "LogPath": "/var/lib/docker/containers/1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801/1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801-json.log",
	        "Name": "/addons-306757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-306757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-306757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801",
	                "LowerDir": "/var/lib/docker/overlay2/8a75b45e2da50416b5986fe8742db10e79d5d6121cd1c3abf812068afe4085ee-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a75b45e2da50416b5986fe8742db10e79d5d6121cd1c3abf812068afe4085ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a75b45e2da50416b5986fe8742db10e79d5d6121cd1c3abf812068afe4085ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a75b45e2da50416b5986fe8742db10e79d5d6121cd1c3abf812068afe4085ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-306757",
	                "Source": "/var/lib/docker/volumes/addons-306757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-306757",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-306757",
	                "name.minikube.sigs.k8s.io": "addons-306757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b45f273fe29ac38f77a75a755e4114e6399c61a93f903ec4dc4ba87d64e7fde1",
	            "SandboxKey": "/var/run/docker/netns/b45f273fe29a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33959"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33960"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33963"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33961"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33962"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-306757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:21:0e:59:97:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "82f9e6307064f58f2ecdba8e1951d766c737f1a4e4caa9a4325031183c0eb10e",
	                    "EndpointID": "49381a60b2eca57df4b91c0e5c9a3c9ff462d933e6ea13a3684e882611ea756b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-306757",
	                        "1897291195d7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-306757 -n addons-306757
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-306757 logs -n 25: (1.207140997s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-285738 --alsologtostderr --binary-mirror http://127.0.0.1:33975 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-285738 │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │                     │
	│ delete  │ -p binary-mirror-285738                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-285738 │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │ 04 Sep 25 06:00 UTC │
	│ addons  │ disable dashboard -p addons-306757                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │                     │
	│ addons  │ enable dashboard -p addons-306757                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │                     │
	│ start   │ -p addons-306757 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │ 04 Sep 25 06:02 UTC │
	│ addons  │ addons-306757 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:02 UTC │ 04 Sep 25 06:02 UTC │
	│ addons  │ addons-306757 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ enable headlamp -p addons-306757 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ ip      │ addons-306757 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-306757                                                                                                                                                                                                                                                                                                                                                                                           │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ ssh     │ addons-306757 ssh cat /opt/local-path-provisioner/pvc-3834d7e9-4691-4682-8525-fbde797f55c6_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ addons  │ addons-306757 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │ 04 Sep 25 06:03 UTC │
	│ ssh     │ addons-306757 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:03 UTC │                     │
	│ addons  │ addons-306757 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:04 UTC │ 04 Sep 25 06:04 UTC │
	│ addons  │ addons-306757 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:04 UTC │ 04 Sep 25 06:04 UTC │
	│ ip      │ addons-306757 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-306757        │ jenkins │ v1.36.0 │ 04 Sep 25 06:06 UTC │ 04 Sep 25 06:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:00:21
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:00:21.670795 1521981 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:00:21.671064 1521981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:00:21.671076 1521981 out.go:374] Setting ErrFile to fd 2...
	I0904 06:00:21.671082 1521981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:00:21.671289 1521981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:00:21.672021 1521981 out.go:368] Setting JSON to false
	I0904 06:00:21.672987 1521981 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13372,"bootTime":1756952250,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:00:21.673112 1521981 start.go:140] virtualization: kvm guest
	I0904 06:00:21.675017 1521981 out.go:179] * [addons-306757] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:00:21.676307 1521981 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:00:21.676322 1521981 notify.go:220] Checking for updates...
	I0904 06:00:21.678460 1521981 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:00:21.679759 1521981 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:00:21.680975 1521981 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:00:21.682064 1521981 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:00:21.683104 1521981 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:00:21.684250 1521981 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:00:21.706383 1521981 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:00:21.706506 1521981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:00:21.753218 1521981 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 06:00:21.743584534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:00:21.753310 1521981 docker.go:318] overlay module found
	I0904 06:00:21.754940 1521981 out.go:179] * Using the docker driver based on user configuration
	I0904 06:00:21.756096 1521981 start.go:304] selected driver: docker
	I0904 06:00:21.756110 1521981 start.go:918] validating driver "docker" against <nil>
	I0904 06:00:21.756123 1521981 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:00:21.756908 1521981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:00:21.802822 1521981 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-04 06:00:21.794591415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:00:21.802993 1521981 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 06:00:21.803223 1521981 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:00:21.804835 1521981 out.go:179] * Using Docker driver with root privileges
	I0904 06:00:21.806129 1521981 cni.go:84] Creating CNI manager for ""
	I0904 06:00:21.806195 1521981 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:00:21.806207 1521981 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 06:00:21.806286 1521981 start.go:348] cluster config:
	{Name:addons-306757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-306757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0904 06:00:21.807588 1521981 out.go:179] * Starting "addons-306757" primary control-plane node in "addons-306757" cluster
	I0904 06:00:21.808865 1521981 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 06:00:21.810003 1521981 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 06:00:21.811388 1521981 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:00:21.811419 1521981 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 06:00:21.811423 1521981 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 06:00:21.811535 1521981 cache.go:58] Caching tarball of preloaded images
	I0904 06:00:21.811636 1521981 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 06:00:21.811653 1521981 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 06:00:21.812080 1521981 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/config.json ...
	I0904 06:00:21.812110 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/config.json: {Name:mk8f1abe4861e37115b8921855f521a350728933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:21.829141 1521981 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc to local cache
	I0904 06:00:21.829256 1521981 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local cache directory
	I0904 06:00:21.829272 1521981 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local cache directory, skipping pull
	I0904 06:00:21.829279 1521981 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in cache, skipping pull
	I0904 06:00:21.829286 1521981 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc as a tarball
	I0904 06:00:21.829293 1521981 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc from local cache
	I0904 06:00:33.993218 1521981 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc from cached tarball
	I0904 06:00:33.993265 1521981 cache.go:232] Successfully downloaded all kic artifacts
	I0904 06:00:33.993307 1521981 start.go:360] acquireMachinesLock for addons-306757: {Name:mkcc65de8f88a1bd5910488c4c205f960cf98b77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 06:00:33.993438 1521981 start.go:364] duration metric: took 105.705µs to acquireMachinesLock for "addons-306757"
	I0904 06:00:33.993475 1521981 start.go:93] Provisioning new machine with config: &{Name:addons-306757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-306757 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 06:00:33.993566 1521981 start.go:125] createHost starting for "" (driver="docker")
	I0904 06:00:33.996293 1521981 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0904 06:00:33.996599 1521981 start.go:159] libmachine.API.Create for "addons-306757" (driver="docker")
	I0904 06:00:33.996646 1521981 client.go:168] LocalClient.Create starting
	I0904 06:00:33.996779 1521981 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem
	I0904 06:00:34.443348 1521981 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem
	I0904 06:00:34.562143 1521981 cli_runner.go:164] Run: docker network inspect addons-306757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 06:00:34.579366 1521981 cli_runner.go:211] docker network inspect addons-306757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 06:00:34.579449 1521981 network_create.go:284] running [docker network inspect addons-306757] to gather additional debugging logs...
	I0904 06:00:34.579465 1521981 cli_runner.go:164] Run: docker network inspect addons-306757
	W0904 06:00:34.596541 1521981 cli_runner.go:211] docker network inspect addons-306757 returned with exit code 1
	I0904 06:00:34.596575 1521981 network_create.go:287] error running [docker network inspect addons-306757]: docker network inspect addons-306757: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-306757 not found
	I0904 06:00:34.596589 1521981 network_create.go:289] output of [docker network inspect addons-306757]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-306757 not found
	
	** /stderr **
	I0904 06:00:34.596682 1521981 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 06:00:34.614110 1521981 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016008d0}
	I0904 06:00:34.614160 1521981 network_create.go:124] attempt to create docker network addons-306757 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0904 06:00:34.614211 1521981 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-306757 addons-306757
	I0904 06:00:34.664762 1521981 network_create.go:108] docker network addons-306757 192.168.49.0/24 created
	I0904 06:00:34.664796 1521981 kic.go:121] calculated static IP "192.168.49.2" for the "addons-306757" container
	I0904 06:00:34.664853 1521981 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 06:00:34.680797 1521981 cli_runner.go:164] Run: docker volume create addons-306757 --label name.minikube.sigs.k8s.io=addons-306757 --label created_by.minikube.sigs.k8s.io=true
	I0904 06:00:34.699100 1521981 oci.go:103] Successfully created a docker volume addons-306757
	I0904 06:00:34.699243 1521981 cli_runner.go:164] Run: docker run --rm --name addons-306757-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-306757 --entrypoint /usr/bin/test -v addons-306757:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -d /var/lib
	I0904 06:00:41.631939 1521981 cli_runner.go:217] Completed: docker run --rm --name addons-306757-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-306757 --entrypoint /usr/bin/test -v addons-306757:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -d /var/lib: (6.932640299s)
	I0904 06:00:41.631972 1521981 oci.go:107] Successfully prepared a docker volume addons-306757
	I0904 06:00:41.632001 1521981 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:00:41.632038 1521981 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 06:00:41.632113 1521981 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-306757:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 06:00:45.959957 1521981 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-306757:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir: (4.327786378s)
	I0904 06:00:45.959997 1521981 kic.go:203] duration metric: took 4.327956316s to extract preloaded images to volume ...
	W0904 06:00:45.960125 1521981 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 06:00:45.960222 1521981 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 06:00:46.008161 1521981 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-306757 --name addons-306757 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-306757 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-306757 --network addons-306757 --ip 192.168.49.2 --volume addons-306757:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc
	I0904 06:00:46.255452 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Running}}
	I0904 06:00:46.273396 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:00:46.291818 1521981 cli_runner.go:164] Run: docker exec addons-306757 stat /var/lib/dpkg/alternatives/iptables
	I0904 06:00:46.333378 1521981 oci.go:144] the created container "addons-306757" has a running status.
	I0904 06:00:46.333417 1521981 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa...
	I0904 06:00:47.233465 1521981 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 06:00:47.255681 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:00:47.271508 1521981 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 06:00:47.271528 1521981 kic_runner.go:114] Args: [docker exec --privileged addons-306757 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 06:00:47.308691 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:00:47.325032 1521981 machine.go:93] provisionDockerMachine start ...
	I0904 06:00:47.325146 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:00:47.341644 1521981 main.go:141] libmachine: Using SSH client type: native
	I0904 06:00:47.341884 1521981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33959 <nil> <nil>}
	I0904 06:00:47.341896 1521981 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 06:00:47.459286 1521981 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-306757
	
	I0904 06:00:47.459316 1521981 ubuntu.go:182] provisioning hostname "addons-306757"
	I0904 06:00:47.459374 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:00:47.476426 1521981 main.go:141] libmachine: Using SSH client type: native
	I0904 06:00:47.476659 1521981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33959 <nil> <nil>}
	I0904 06:00:47.476680 1521981 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-306757 && echo "addons-306757" | sudo tee /etc/hostname
	I0904 06:00:47.606930 1521981 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-306757
	
	I0904 06:00:47.607005 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:00:47.624724 1521981 main.go:141] libmachine: Using SSH client type: native
	I0904 06:00:47.624936 1521981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33959 <nil> <nil>}
	I0904 06:00:47.624953 1521981 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-306757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-306757/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-306757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 06:00:47.739957 1521981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 06:00:47.739995 1521981 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1516970/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1516970/.minikube}
	I0904 06:00:47.740021 1521981 ubuntu.go:190] setting up certificates
	I0904 06:00:47.740042 1521981 provision.go:84] configureAuth start
	I0904 06:00:47.740117 1521981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-306757
	I0904 06:00:47.757376 1521981 provision.go:143] copyHostCerts
	I0904 06:00:47.757470 1521981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem (1082 bytes)
	I0904 06:00:47.757610 1521981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem (1123 bytes)
	I0904 06:00:47.757710 1521981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem (1675 bytes)
	I0904 06:00:47.757805 1521981 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem org=jenkins.addons-306757 san=[127.0.0.1 192.168.49.2 addons-306757 localhost minikube]
	I0904 06:00:47.952392 1521981 provision.go:177] copyRemoteCerts
	I0904 06:00:47.952463 1521981 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 06:00:47.952512 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:00:47.969463 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:00:48.056591 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 06:00:48.078913 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 06:00:48.100143 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
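
For reference: the three scp steps above push the CA plus the freshly generated server cert/key into /etc/docker inside the node. An illustrative way (not executed by the test) to confirm the SANs baked into that cert match the san=[...] list from the "generating server cert" line above:

	out/minikube-linux-amd64 -p addons-306757 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"
	# Expected to list 127.0.0.1, 192.168.49.2, addons-306757, localhost and minikube.
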
	I0904 06:00:48.121222 1521981 provision.go:87] duration metric: took 381.160028ms to configureAuth
	I0904 06:00:48.121247 1521981 ubuntu.go:206] setting minikube options for container-runtime
	I0904 06:00:48.121402 1521981 config.go:182] Loaded profile config "addons-306757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:00:48.121514 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:00:48.138700 1521981 main.go:141] libmachine: Using SSH client type: native
	I0904 06:00:48.138932 1521981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 33959 <nil> <nil>}
	I0904 06:00:48.138950 1521981 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 06:00:48.346740 1521981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 06:00:48.346771 1521981 machine.go:96] duration metric: took 1.021714429s to provisionDockerMachine
	I0904 06:00:48.346785 1521981 client.go:171] duration metric: took 14.35012755s to LocalClient.Create
	I0904 06:00:48.346813 1521981 start.go:167] duration metric: took 14.350216424s to libmachine.API.Create "addons-306757"
	I0904 06:00:48.346825 1521981 start.go:293] postStartSetup for "addons-306757" (driver="docker")
	I0904 06:00:48.346838 1521981 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 06:00:48.346891 1521981 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 06:00:48.346937 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:00:48.363918 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:00:48.452885 1521981 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 06:00:48.456072 1521981 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 06:00:48.456100 1521981 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 06:00:48.456108 1521981 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 06:00:48.456115 1521981 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 06:00:48.456126 1521981 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/addons for local assets ...
	I0904 06:00:48.456195 1521981 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/files for local assets ...
	I0904 06:00:48.456223 1521981 start.go:296] duration metric: took 109.389235ms for postStartSetup
	I0904 06:00:48.456523 1521981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-306757
	I0904 06:00:48.474499 1521981 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/config.json ...
	I0904 06:00:48.474763 1521981 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:00:48.474806 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:00:48.492571 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:00:48.576799 1521981 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 06:00:48.581009 1521981 start.go:128] duration metric: took 14.587425423s to createHost
	I0904 06:00:48.581033 1521981 start.go:83] releasing machines lock for "addons-306757", held for 14.587578952s
	I0904 06:00:48.581103 1521981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-306757
	I0904 06:00:48.599673 1521981 ssh_runner.go:195] Run: cat /version.json
	I0904 06:00:48.599754 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:00:48.599766 1521981 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 06:00:48.599856 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:00:48.618177 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:00:48.618561 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:00:48.699215 1521981 ssh_runner.go:195] Run: systemctl --version
	I0904 06:00:48.770511 1521981 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 06:00:48.909745 1521981 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 06:00:48.914173 1521981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:00:48.931729 1521981 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 06:00:48.931837 1521981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:00:48.957992 1521981 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 06:00:48.958021 1521981 start.go:495] detecting cgroup driver to use...
	I0904 06:00:48.958059 1521981 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 06:00:48.958119 1521981 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 06:00:48.972130 1521981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 06:00:48.982427 1521981 docker.go:218] disabling cri-docker service (if available) ...
	I0904 06:00:48.982481 1521981 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 06:00:48.994763 1521981 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 06:00:49.007674 1521981 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 06:00:49.089473 1521981 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 06:00:49.166733 1521981 docker.go:234] disabling docker service ...
	I0904 06:00:49.166807 1521981 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 06:00:49.184674 1521981 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 06:00:49.195000 1521981 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 06:00:49.271616 1521981 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 06:00:49.356339 1521981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 06:00:49.366843 1521981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 06:00:49.380854 1521981 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 06:00:49.380922 1521981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:00:49.389602 1521981 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 06:00:49.389659 1521981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:00:49.398467 1521981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:00:49.407339 1521981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:00:49.416262 1521981 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 06:00:49.424334 1521981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:00:49.433057 1521981 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:00:49.447540 1521981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
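
The sed edits in the 06:00:49.380-49.447 range all rewrite /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A quick, purely illustrative check of the net result inside the node (exact file layout may differ between kicbase images):

	out/minikube-linux-amd64 -p addons-306757 ssh "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# Roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
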
	I0904 06:00:49.456677 1521981 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 06:00:49.465048 1521981 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 06:00:49.473042 1521981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:00:49.538746 1521981 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 06:00:49.637126 1521981 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 06:00:49.637223 1521981 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 06:00:49.640819 1521981 start.go:563] Will wait 60s for crictl version
	I0904 06:00:49.640877 1521981 ssh_runner.go:195] Run: which crictl
	I0904 06:00:49.643914 1521981 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 06:00:49.675607 1521981 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 06:00:49.675709 1521981 ssh_runner.go:195] Run: crio --version
	I0904 06:00:49.709527 1521981 ssh_runner.go:195] Run: crio --version
	I0904 06:00:49.744717 1521981 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 06:00:49.745973 1521981 cli_runner.go:164] Run: docker network inspect addons-306757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 06:00:49.763336 1521981 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0904 06:00:49.767239 1521981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:00:49.777905 1521981 kubeadm.go:875] updating cluster {Name:addons-306757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-306757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 06:00:49.778009 1521981 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:00:49.778056 1521981 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:00:49.843731 1521981 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:00:49.843754 1521981 crio.go:433] Images already preloaded, skipping extraction
	I0904 06:00:49.843817 1521981 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:00:49.876051 1521981 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:00:49.876077 1521981 cache_images.go:85] Images are preloaded, skipping loading
	I0904 06:00:49.876088 1521981 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0904 06:00:49.876194 1521981 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-306757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-306757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 06:00:49.876276 1521981 ssh_runner.go:195] Run: crio config
	I0904 06:00:49.918991 1521981 cni.go:84] Creating CNI manager for ""
	I0904 06:00:49.919024 1521981 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:00:49.919039 1521981 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 06:00:49.919069 1521981 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-306757 NodeName:addons-306757 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 06:00:49.919256 1521981 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-306757"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 06:00:49.919338 1521981 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 06:00:49.927933 1521981 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 06:00:49.928002 1521981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 06:00:49.935967 1521981 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 06:00:49.952152 1521981 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 06:00:49.968722 1521981 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
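
The 2210-byte kubeadm.yaml.new just copied above is the rendered config from the kubeadm.go:195 dump; it is copied to /var/tmp/minikube/kubeadm.yaml right before init further down. If that file needed a hand sanity-check, one sketch (not something the harness does, and note the real run ignores a long preflight list, so a dry run may still complain on the docker driver) would be:

	out/minikube-linux-amd64 -p addons-306757 ssh "sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run"
	# --dry-run renders manifests and certs into a temporary directory without changing the node.
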
	I0904 06:00:49.984806 1521981 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0904 06:00:49.987930 1521981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:00:49.998109 1521981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:00:50.076623 1521981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:00:50.089189 1521981 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757 for IP: 192.168.49.2
	I0904 06:00:50.089215 1521981 certs.go:194] generating shared ca certs ...
	I0904 06:00:50.089237 1521981 certs.go:226] acquiring lock for ca certs: {Name:mk2d06825c36f44304767b415a9a93c84edb2667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:50.089379 1521981 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key
	I0904 06:00:50.187117 1521981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt ...
	I0904 06:00:50.187150 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt: {Name:mk07fe6aeb205ee459e913511cdb7875d3557906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:50.187318 1521981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key ...
	I0904 06:00:50.187329 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key: {Name:mkb4c5863ed9315d731e9aac970cb7904a241ffc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:50.187399 1521981 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key
	I0904 06:00:50.629311 1521981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.crt ...
	I0904 06:00:50.629346 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.crt: {Name:mkce62331183f5e7599c7826362e5af1666c6bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:50.629517 1521981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key ...
	I0904 06:00:50.629528 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key: {Name:mk4072b6b605543542ec2ebbe27296d66f2405fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:50.629615 1521981 certs.go:256] generating profile certs ...
	I0904 06:00:50.629709 1521981 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.key
	I0904 06:00:50.629734 1521981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt with IP's: []
	I0904 06:00:50.886272 1521981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt ...
	I0904 06:00:50.886311 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: {Name:mk3d54b5f7b109ba40829b645a2a52539b57a00b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:50.886490 1521981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.key ...
	I0904 06:00:50.886502 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.key: {Name:mkd4dc84d80c5b30302c021bb5c57e492d505d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:50.886581 1521981 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.key.a7a44d98
	I0904 06:00:50.886603 1521981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.crt.a7a44d98 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0904 06:00:51.079955 1521981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.crt.a7a44d98 ...
	I0904 06:00:51.079993 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.crt.a7a44d98: {Name:mka28bd5aa92b09255990be1084143c1ee5118fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:51.080164 1521981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.key.a7a44d98 ...
	I0904 06:00:51.080177 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.key.a7a44d98: {Name:mk12eccf2f119ec748e07cd6a408f9728bd6c4ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:51.080248 1521981 certs.go:381] copying /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.crt.a7a44d98 -> /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.crt
	I0904 06:00:51.080325 1521981 certs.go:385] copying /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.key.a7a44d98 -> /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.key
	I0904 06:00:51.080373 1521981 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/proxy-client.key
	I0904 06:00:51.080391 1521981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/proxy-client.crt with IP's: []
	I0904 06:00:51.318475 1521981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/proxy-client.crt ...
	I0904 06:00:51.318523 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/proxy-client.crt: {Name:mk7a99840679228031dfabc0ab08803f9e4232c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:51.318742 1521981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/proxy-client.key ...
	I0904 06:00:51.318767 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/proxy-client.key: {Name:mk85202a923144a4bbd358ffa08fc9b07b039fed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:00:51.318982 1521981 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 06:00:51.319034 1521981 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem (1082 bytes)
	I0904 06:00:51.319075 1521981 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem (1123 bytes)
	I0904 06:00:51.319111 1521981 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem (1675 bytes)
	I0904 06:00:51.319747 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 06:00:51.342669 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 06:00:51.363744 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 06:00:51.385080 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 06:00:51.406893 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 06:00:51.428017 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 06:00:51.449478 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 06:00:51.471583 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 06:00:51.492652 1521981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 06:00:51.513822 1521981 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 06:00:51.529180 1521981 ssh_runner.go:195] Run: openssl version
	I0904 06:00:51.534057 1521981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 06:00:51.542231 1521981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:00:51.545349 1521981 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 06:00 /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:00:51.545393 1521981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:00:51.551436 1521981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
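
The openssl x509 -hash / ln pair above follows OpenSSL's hashed-directory convention: the CA is linked into /etc/ssl/certs both under its own name and under <subject-hash>.0 (b5213941.0 here), so hash-based lookups in the system trust store resolve to minikubeCA. An equivalent manual check, illustrative only:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink -> /etc/ssl/certs/minikubeCA.pem
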
	I0904 06:00:51.559748 1521981 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 06:00:51.562595 1521981 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 06:00:51.562640 1521981 kubeadm.go:392] StartCluster: {Name:addons-306757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-306757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:00:51.562721 1521981 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 06:00:51.562756 1521981 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 06:00:51.599647 1521981 cri.go:89] found id: ""
	I0904 06:00:51.599724 1521981 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 06:00:51.608939 1521981 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 06:00:51.617597 1521981 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 06:00:51.617649 1521981 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 06:00:51.625621 1521981 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 06:00:51.625639 1521981 kubeadm.go:157] found existing configuration files:
	
	I0904 06:00:51.625721 1521981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 06:00:51.633360 1521981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 06:00:51.633422 1521981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 06:00:51.640685 1521981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 06:00:51.648280 1521981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 06:00:51.648334 1521981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 06:00:51.655695 1521981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 06:00:51.663324 1521981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 06:00:51.663370 1521981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 06:00:51.670875 1521981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 06:00:51.678299 1521981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 06:00:51.678357 1521981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 06:00:51.685786 1521981 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 06:00:51.721784 1521981 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 06:00:51.721865 1521981 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 06:00:51.735869 1521981 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 06:00:51.735982 1521981 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0904 06:00:51.736024 1521981 kubeadm.go:310] OS: Linux
	I0904 06:00:51.736087 1521981 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 06:00:51.736144 1521981 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 06:00:51.736184 1521981 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 06:00:51.736271 1521981 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 06:00:51.736341 1521981 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 06:00:51.736426 1521981 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 06:00:51.736499 1521981 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 06:00:51.736575 1521981 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 06:00:51.736637 1521981 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 06:00:51.785335 1521981 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 06:00:51.785501 1521981 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 06:00:51.785670 1521981 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 06:00:51.793053 1521981 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 06:00:51.795010 1521981 out.go:252]   - Generating certificates and keys ...
	I0904 06:00:51.795113 1521981 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 06:00:51.795209 1521981 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 06:00:51.877294 1521981 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 06:00:51.956624 1521981 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 06:00:52.131472 1521981 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 06:00:52.300304 1521981 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 06:00:52.536079 1521981 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 06:00:52.536247 1521981 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-306757 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 06:00:52.618340 1521981 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 06:00:52.618511 1521981 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-306757 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0904 06:00:53.158584 1521981 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 06:00:53.345817 1521981 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 06:00:53.788464 1521981 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 06:00:53.788535 1521981 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 06:00:53.827873 1521981 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 06:00:54.004943 1521981 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 06:00:54.057485 1521981 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 06:00:54.300074 1521981 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 06:00:54.371459 1521981 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 06:00:54.371925 1521981 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 06:00:54.374100 1521981 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 06:00:54.376317 1521981 out.go:252]   - Booting up control plane ...
	I0904 06:00:54.376399 1521981 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 06:00:54.376465 1521981 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 06:00:54.376560 1521981 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 06:00:54.385602 1521981 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 06:00:54.385694 1521981 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 06:00:54.391929 1521981 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 06:00:54.392168 1521981 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 06:00:54.392239 1521981 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 06:00:54.475715 1521981 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 06:00:54.475862 1521981 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 06:00:54.977259 1521981 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.667215ms
	I0904 06:00:54.979894 1521981 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 06:00:54.980016 1521981 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0904 06:00:54.980158 1521981 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 06:00:54.980230 1521981 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 06:00:56.738252 1521981 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.758064917s
	I0904 06:00:58.316213 1521981 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.336088532s
	I0904 06:00:59.982208 1521981 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.002054808s
	I0904 06:00:59.994154 1521981 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 06:01:00.007482 1521981 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 06:01:00.019046 1521981 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 06:01:00.019331 1521981 kubeadm.go:310] [mark-control-plane] Marking the node addons-306757 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 06:01:00.029239 1521981 kubeadm.go:310] [bootstrap-token] Using token: 1hae2s.mvntlaevqy71ouh3
	I0904 06:01:00.030775 1521981 out.go:252]   - Configuring RBAC rules ...
	I0904 06:01:00.030958 1521981 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 06:01:00.035436 1521981 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 06:01:00.044156 1521981 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 06:01:00.047327 1521981 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 06:01:00.051188 1521981 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 06:01:00.054393 1521981 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 06:01:00.388112 1521981 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 06:01:00.820473 1521981 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 06:01:01.387965 1521981 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 06:01:01.388902 1521981 kubeadm.go:310] 
	I0904 06:01:01.388994 1521981 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 06:01:01.389003 1521981 kubeadm.go:310] 
	I0904 06:01:01.389116 1521981 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 06:01:01.389127 1521981 kubeadm.go:310] 
	I0904 06:01:01.389162 1521981 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 06:01:01.389250 1521981 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 06:01:01.389327 1521981 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 06:01:01.389337 1521981 kubeadm.go:310] 
	I0904 06:01:01.389413 1521981 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 06:01:01.389422 1521981 kubeadm.go:310] 
	I0904 06:01:01.389485 1521981 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 06:01:01.389493 1521981 kubeadm.go:310] 
	I0904 06:01:01.389574 1521981 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 06:01:01.389676 1521981 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 06:01:01.389772 1521981 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 06:01:01.389780 1521981 kubeadm.go:310] 
	I0904 06:01:01.389908 1521981 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 06:01:01.390050 1521981 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 06:01:01.390074 1521981 kubeadm.go:310] 
	I0904 06:01:01.390210 1521981 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1hae2s.mvntlaevqy71ouh3 \
	I0904 06:01:01.390355 1521981 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d9630fca242e1003deb76bc0b7b7c54e9b6615fdc1e764ca81723c39d5691bf \
	I0904 06:01:01.390388 1521981 kubeadm.go:310] 	--control-plane 
	I0904 06:01:01.390397 1521981 kubeadm.go:310] 
	I0904 06:01:01.390526 1521981 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 06:01:01.390540 1521981 kubeadm.go:310] 
	I0904 06:01:01.390636 1521981 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1hae2s.mvntlaevqy71ouh3 \
	I0904 06:01:01.390800 1521981 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d9630fca242e1003deb76bc0b7b7c54e9b6615fdc1e764ca81723c39d5691bf 
	I0904 06:01:01.392937 1521981 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 06:01:01.393238 1521981 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0904 06:01:01.393373 1521981 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
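
The cgroups v1 warning above simply flags that the host is still on the legacy hierarchy, consistent with the "cgroupfs" driver detected earlier. A one-line way to check which hierarchy a host runs (diagnostic sketch, not part of the test run):

	stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" = unified cgroup v2; "tmpfs" = legacy cgroup v1 (what this warning implies).
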
	I0904 06:01:01.393402 1521981 cni.go:84] Creating CNI manager for ""
	I0904 06:01:01.393415 1521981 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:01:01.396369 1521981 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0904 06:01:01.397855 1521981 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 06:01:01.401850 1521981 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 06:01:01.401872 1521981 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 06:01:01.419062 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 06:01:01.623928 1521981 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 06:01:01.624011 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:01.624067 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-306757 minikube.k8s.io/updated_at=2025_09_04T06_01_01_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff minikube.k8s.io/name=addons-306757 minikube.k8s.io/primary=true
	I0904 06:01:01.631875 1521981 ops.go:34] apiserver oom_adj: -16
	I0904 06:01:01.727761 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:02.227864 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:02.728679 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:03.227930 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:03.728683 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:04.227952 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:04.728795 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:05.228109 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:05.727965 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:06.228475 1521981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 06:01:06.298324 1521981 kubeadm.go:1105] duration metric: took 4.674376654s to wait for elevateKubeSystemPrivileges
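	The duration metric above covers the privilege-elevation step: the log shows ~500ms-spaced `kubectl get sa default` probes plus the earlier `create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default` call. A minimal sketch of that polling-then-bind sequence, using plain os/exec in place of minikube's ssh_runner; only the kubectl binary and kubeconfig paths are taken from the log, the rest is illustrative and not minikube's actual elevateKubeSystemPrivileges code.

	// Illustrative only: wait for the default ServiceAccount, then grant it cluster-admin.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	const (
		kubectl    = "/var/lib/minikube/binaries/v1.34.0/kubectl" // path from the log
		kubeconfig = "--kubeconfig=/var/lib/minikube/kubeconfig"  // path from the log
	)

	func run(args ...string) error {
		out, err := exec.Command(kubectl, append(args, kubeconfig)...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// The log polls this roughly every 500ms until the SA exists.
			if err := run("get", "sa", "default"); err == nil {
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
		// Bind cluster-admin to kube-system:default (the "minikube-rbac" clusterrolebinding logged above).
		if err := run("create", "clusterrolebinding", "minikube-rbac",
			"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default"); err != nil {
			fmt.Println("clusterrolebinding:", err)
		}
	}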
	I0904 06:01:06.298366 1521981 kubeadm.go:394] duration metric: took 14.735732662s to StartCluster
	I0904 06:01:06.298389 1521981 settings.go:142] acquiring lock: {Name:mk2d1c8a569b62879275d6daa2b799b595d6bca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:01:06.298535 1521981 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:01:06.299070 1521981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:01:06.299313 1521981 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 06:01:06.299327 1521981 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 06:01:06.299396 1521981 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0904 06:01:06.299512 1521981 addons.go:69] Setting yakd=true in profile "addons-306757"
	I0904 06:01:06.299523 1521981 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-306757"
	I0904 06:01:06.299540 1521981 addons.go:238] Setting addon yakd=true in "addons-306757"
	I0904 06:01:06.299551 1521981 config.go:182] Loaded profile config "addons-306757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:01:06.299583 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.299588 1521981 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-306757"
	I0904 06:01:06.299607 1521981 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-306757"
	I0904 06:01:06.299621 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.299637 1521981 addons.go:69] Setting cloud-spanner=true in profile "addons-306757"
	I0904 06:01:06.299652 1521981 addons.go:238] Setting addon cloud-spanner=true in "addons-306757"
	I0904 06:01:06.299681 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.299753 1521981 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-306757"
	I0904 06:01:06.299782 1521981 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-306757"
	I0904 06:01:06.299835 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.300107 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.300112 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.300194 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.300273 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.300454 1521981 addons.go:69] Setting ingress=true in profile "addons-306757"
	I0904 06:01:06.300488 1521981 addons.go:238] Setting addon ingress=true in "addons-306757"
	I0904 06:01:06.300499 1521981 addons.go:69] Setting default-storageclass=true in profile "addons-306757"
	I0904 06:01:06.300532 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.300545 1521981 addons.go:69] Setting gcp-auth=true in profile "addons-306757"
	I0904 06:01:06.300558 1521981 addons.go:69] Setting ingress-dns=true in profile "addons-306757"
	I0904 06:01:06.300567 1521981 mustload.go:65] Loading cluster: addons-306757
	I0904 06:01:06.300571 1521981 addons.go:238] Setting addon ingress-dns=true in "addons-306757"
	I0904 06:01:06.300603 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.300764 1521981 config.go:182] Loaded profile config "addons-306757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:01:06.300992 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.301002 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.301051 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.299624 1521981 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-306757"
	I0904 06:01:06.301330 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.301815 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.302175 1521981 addons.go:69] Setting registry=true in profile "addons-306757"
	I0904 06:01:06.302201 1521981 out.go:179] * Verifying Kubernetes components...
	I0904 06:01:06.302211 1521981 addons.go:238] Setting addon registry=true in "addons-306757"
	I0904 06:01:06.302250 1521981 addons.go:69] Setting registry-creds=true in profile "addons-306757"
	I0904 06:01:06.302261 1521981 addons.go:238] Setting addon registry-creds=true in "addons-306757"
	I0904 06:01:06.302278 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.302730 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.304925 1521981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:01:06.300532 1521981 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-306757"
	I0904 06:01:06.305633 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.306286 1521981 addons.go:69] Setting metrics-server=true in profile "addons-306757"
	I0904 06:01:06.306310 1521981 addons.go:238] Setting addon metrics-server=true in "addons-306757"
	I0904 06:01:06.306344 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.300547 1521981 addons.go:69] Setting inspektor-gadget=true in profile "addons-306757"
	I0904 06:01:06.306499 1521981 addons.go:238] Setting addon inspektor-gadget=true in "addons-306757"
	I0904 06:01:06.306532 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.306832 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.307001 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.302241 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.307500 1521981 addons.go:69] Setting volcano=true in profile "addons-306757"
	I0904 06:01:06.307530 1521981 addons.go:238] Setting addon volcano=true in "addons-306757"
	I0904 06:01:06.307566 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.307853 1521981 addons.go:69] Setting storage-provisioner=true in profile "addons-306757"
	I0904 06:01:06.307938 1521981 addons.go:238] Setting addon storage-provisioner=true in "addons-306757"
	I0904 06:01:06.307973 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.308278 1521981 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-306757"
	I0904 06:01:06.308346 1521981 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-306757"
	I0904 06:01:06.308449 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.308788 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.309344 1521981 addons.go:69] Setting volumesnapshots=true in profile "addons-306757"
	I0904 06:01:06.309381 1521981 addons.go:238] Setting addon volumesnapshots=true in "addons-306757"
	I0904 06:01:06.309413 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.312964 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.337204 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.337811 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.339110 1521981 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0904 06:01:06.341977 1521981 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 06:01:06.342000 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 06:01:06.342058 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.342087 1521981 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0904 06:01:06.343805 1521981 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 06:01:06.343829 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0904 06:01:06.343886 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.344677 1521981 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0904 06:01:06.346493 1521981 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 06:01:06.347843 1521981 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 06:01:06.348010 1521981 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 06:01:06.348030 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0904 06:01:06.348084 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.351155 1521981 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 06:01:06.351227 1521981 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 06:01:06.353377 1521981 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 06:01:06.355140 1521981 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 06:01:06.356552 1521981 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0904 06:01:06.356690 1521981 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 06:01:06.357406 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.358077 1521981 addons.go:238] Setting addon default-storageclass=true in "addons-306757"
	I0904 06:01:06.358147 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.358637 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.359073 1521981 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 06:01:06.359134 1521981 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 06:01:06.360286 1521981 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 06:01:06.360310 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 06:01:06.360376 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.360587 1521981 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 06:01:06.361579 1521981 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 06:01:06.361602 1521981 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 06:01:06.361656 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.375606 1521981 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0904 06:01:06.376791 1521981 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0904 06:01:06.376815 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 06:01:06.376881 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.394497 1521981 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0904 06:01:06.395714 1521981 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 06:01:06.395746 1521981 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0904 06:01:06.395845 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.400695 1521981 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0904 06:01:06.401768 1521981 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 06:01:06.401793 1521981 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 06:01:06.401860 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.404580 1521981 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0904 06:01:06.406420 1521981 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 06:01:06.406445 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0904 06:01:06.406507 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.410027 1521981 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-306757"
	I0904 06:01:06.410079 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:06.410542 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:06.410772 1521981 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 06:01:06.413117 1521981 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 06:01:06.413138 1521981 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 06:01:06.413198 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.424440 1521981 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 06:01:06.425657 1521981 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 06:01:06.425692 1521981 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 06:01:06.425755 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.428856 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.432795 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.435938 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.436075 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.437375 1521981 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0904 06:01:06.438763 1521981 out.go:179]   - Using image docker.io/registry:3.0.0
	I0904 06:01:06.442135 1521981 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 06:01:06.442154 1521981 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 06:01:06.442214 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.442536 1521981 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 06:01:06.442551 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 06:01:06.442601 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.443046 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	W0904 06:01:06.448017 1521981 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0904 06:01:06.458338 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.460939 1521981 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 06:01:06.462091 1521981 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:01:06.462110 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 06:01:06.462178 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.474276 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.475310 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.477679 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.477931 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.478517 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.480240 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.481611 1521981 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 06:01:06.483036 1521981 out.go:179]   - Using image docker.io/busybox:stable
	I0904 06:01:06.484342 1521981 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 06:01:06.484365 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 06:01:06.484424 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:06.488581 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.493125 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:06.510264 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	W0904 06:01:06.511429 1521981 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0904 06:01:06.511464 1521981 retry.go:31] will retry after 337.11798ms: ssh: handshake failed: EOF
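	The two lines above show an SSH dial failing with a handshake EOF and retry.go scheduling another attempt after a sub-second delay. A small, generic sketch of that retry-with-backoff pattern; the durations, jitter, and attempt count are illustrative, not minikube's actual retry.go.

	// Illustrative retry helper: capped attempts with a jittered, roughly doubling backoff.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Sub-second, jittered delays, mirroring the "will retry after 337.11798ms" style above.
			d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(5, 300*time.Millisecond, func() error {
			return errors.New("ssh: handshake failed: EOF") // stand-in for the real SSH dial
		})
	}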
	I0904 06:01:06.704351 1521981 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 06:01:06.704454 1521981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:01:06.807097 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 06:01:06.811786 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 06:01:06.820100 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 06:01:06.901456 1521981 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 06:01:06.901554 1521981 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 06:01:06.909674 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:01:06.924471 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:01:07.011820 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 06:01:07.102414 1521981 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 06:01:07.102454 1521981 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 06:01:07.103014 1521981 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 06:01:07.103037 1521981 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 06:01:07.111066 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 06:01:07.120020 1521981 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 06:01:07.120137 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 06:01:07.203930 1521981 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:07.203960 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0904 06:01:07.209281 1521981 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 06:01:07.209364 1521981 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 06:01:07.220306 1521981 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 06:01:07.220393 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 06:01:07.220993 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 06:01:07.420238 1521981 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 06:01:07.420339 1521981 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 06:01:07.508891 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 06:01:07.515963 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 06:01:07.522548 1521981 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 06:01:07.522630 1521981 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 06:01:07.623734 1521981 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 06:01:07.623837 1521981 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 06:01:07.716521 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:07.801650 1521981 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 06:01:07.801773 1521981 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 06:01:07.812776 1521981 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 06:01:07.812863 1521981 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 06:01:08.016072 1521981 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 06:01:08.016181 1521981 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 06:01:08.116583 1521981 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:01:08.116683 1521981 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 06:01:08.214253 1521981 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 06:01:08.214345 1521981 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 06:01:08.313382 1521981 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 06:01:08.313469 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 06:01:08.319838 1521981 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 06:01:08.319870 1521981 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 06:01:08.507197 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:01:08.701407 1521981 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.996921362s)
	I0904 06:01:08.701545 1521981 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.997154998s)
	I0904 06:01:08.702365 1521981 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0904 06:01:08.709665 1521981 node_ready.go:35] waiting up to 6m0s for node "addons-306757" to be "Ready" ...
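	node_ready.go above starts a 6-minute wait for the node's Ready condition; the `"Ready":"False" status (will retry)` lines that follow come from the same loop. A rough client-go equivalent of that wait, assuming the kubeconfig path shown in the log; a sketch, not minikube's implementation.

	// Illustrative: poll the node until its Ready condition is True, or give up after 6 minutes.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-306757", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for node to be Ready")
	}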
	I0904 06:01:08.723314 1521981 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 06:01:08.723339 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 06:01:08.807487 1521981 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 06:01:08.807518 1521981 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 06:01:08.921877 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 06:01:09.101585 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 06:01:09.308881 1521981 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-306757" context rescaled to 1 replicas
	I0904 06:01:09.317639 1521981 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 06:01:09.317727 1521981 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 06:01:09.416849 1521981 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 06:01:09.416926 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 06:01:09.810121 1521981 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 06:01:09.810226 1521981 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 06:01:10.302289 1521981 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 06:01:10.302322 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 06:01:10.423090 1521981 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 06:01:10.423117 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0904 06:01:10.619745 1521981 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 06:01:10.619893 1521981 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W0904 06:01:10.724283 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:10.802692 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 06:01:12.214318 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.40717199s)
	I0904 06:01:12.214366 1521981 addons.go:479] Verifying addon ingress=true in "addons-306757"
	I0904 06:01:12.214395 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.402553066s)
	I0904 06:01:12.214483 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.394357673s)
	I0904 06:01:12.214563 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.304864155s)
	I0904 06:01:12.214611 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.290105371s)
	I0904 06:01:12.214949 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.203093258s)
	I0904 06:01:12.215035 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.10394068s)
	I0904 06:01:12.215089 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.994031384s)
	I0904 06:01:12.215163 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.706186016s)
	I0904 06:01:12.215352 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.699296437s)
	I0904 06:01:12.215400 1521981 addons.go:479] Verifying addon registry=true in "addons-306757"
	I0904 06:01:12.215672 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.499045333s)
	W0904 06:01:12.215752 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:12.215823 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.708498453s)
	I0904 06:01:12.215860 1521981 addons.go:479] Verifying addon metrics-server=true in "addons-306757"
	I0904 06:01:12.215833 1521981 retry.go:31] will retry after 295.188681ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:12.215898 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.293915254s)
	I0904 06:01:12.216945 1521981 out.go:179] * Verifying registry addon...
	I0904 06:01:12.216945 1521981 out.go:179] * Verifying ingress addon...
	I0904 06:01:12.216944 1521981 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-306757 service yakd-dashboard -n yakd-dashboard
	
	I0904 06:01:12.219103 1521981 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 06:01:12.219192 1521981 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 06:01:12.222806 1521981 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 06:01:12.222828 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:12.222939 1521981 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 06:01:12.222960 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 06:01:12.226784 1521981 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0904 06:01:12.511865 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:12.723631 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:12.723864 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 06:01:13.217536 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:13.225189 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:13.225351 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:13.429635 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.32791417s)
	W0904 06:01:13.429695 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 06:01:13.429755 1521981 retry.go:31] will retry after 140.794086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 06:01:13.429893 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.627155628s)
	I0904 06:01:13.429926 1521981 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-306757"
	I0904 06:01:13.432337 1521981 out.go:179] * Verifying csi-hostpath-driver addon...
	I0904 06:01:13.434028 1521981 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 06:01:13.436462 1521981 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 06:01:13.436484 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:13.571250 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 06:01:13.722197 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:13.722239 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:13.937765 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:13.955490 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.443585027s)
	W0904 06:01:13.955532 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:13.955551 1521981 retry.go:31] will retry after 214.317433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:13.964053 1521981 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 06:01:13.964118 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:13.981477 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
	I0904 06:01:14.097329 1521981 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 06:01:14.113678 1521981 addons.go:238] Setting addon gcp-auth=true in "addons-306757"
	I0904 06:01:14.113739 1521981 host.go:66] Checking if "addons-306757" exists ...
	I0904 06:01:14.114194 1521981 cli_runner.go:164] Run: docker container inspect addons-306757 --format={{.State.Status}}
	I0904 06:01:14.131847 1521981 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 06:01:14.131935 1521981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-306757
	I0904 06:01:14.149046 1521981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33959 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/addons-306757/id_rsa Username:docker}
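	Each `new ssh client` line above is preceded by a `docker container inspect -f` call that extracts the host port Docker mapped to the container's 22/tcp (33959 in this run), which is then dialed at 127.0.0.1. A minimal standalone sketch of that lookup, reusing the Go-template format string from the log; assumes Docker and the addons-306757 container are available.

	// Illustrative: resolve the host port mapped to the container's SSH port via docker inspect.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-306757").Output()
		if err != nil {
			panic(err)
		}
		port := strings.TrimSpace(string(out))
		// minikube then dials 127.0.0.1:<port> with the machine's SSH key, as the sshutil lines show.
		fmt.Println("ssh endpoint:", "127.0.0.1:"+port)
	}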
	I0904 06:01:14.170358 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:14.224837 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:14.224890 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:14.437281 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:14.722083 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:14.722198 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:14.936822 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:15.222061 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:15.222130 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:15.436786 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0904 06:01:15.712269 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:15.722038 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:15.722091 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:15.936756 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:16.153657 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.582367084s)
	I0904 06:01:16.153790 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.983390716s)
	I0904 06:01:16.153827 1521981 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.02194999s)
	W0904 06:01:16.153834 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:16.153856 1521981 retry.go:31] will retry after 607.888893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
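	[editor's note] The block above is the recurring failure this run keeps retrying: every kubectl apply of ig-crd.yaml is rejected because the manifest lacks its top-level apiVersion and kind fields, so the addon installer logs "apply failed, will retry" and backs off (kubectl's own message suggests --validate=false as a workaround). As a rough illustration only of that apply-and-retry pattern (this is not minikube's implementation; the command path comes from the log, the fixed backoff schedule and function names are assumptions):

	// Illustrative sketch of the apply-and-retry behaviour visible above.
	// NOT minikube's code: the kubectl command line is copied from the log,
	// everything else (backoff schedule, helper names) is assumed.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyAddonManifests() error {
		// Same command the log shows failing with "apiVersion not set, kind not set".
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.0/kubectl", "apply", "--force",
			"-f", "/etc/kubernetes/addons/ig-crd.yaml",
			"-f", "/etc/kubernetes/addons/ig-deployment.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply failed: %w\n%s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical backoff schedule; the real log shows jittered delays
		// (607ms, 1.25s, 1.07s, 2.4s, ...).
		backoffs := []time.Duration{600 * time.Millisecond, time.Second, 2 * time.Second, 4 * time.Second}
		for _, d := range backoffs {
			err := applyAddonManifests()
			if err == nil {
				return
			}
			fmt.Printf("apply failed, will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
	}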
	I0904 06:01:16.155710 1521981 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0904 06:01:16.157198 1521981 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 06:01:16.158486 1521981 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 06:01:16.158508 1521981 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 06:01:16.175935 1521981 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 06:01:16.175959 1521981 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 06:01:16.192211 1521981 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 06:01:16.192234 1521981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 06:01:16.208378 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 06:01:16.222411 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:16.222518 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:16.437367 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:16.519262 1521981 addons.go:479] Verifying addon gcp-auth=true in "addons-306757"
	I0904 06:01:16.521323 1521981 out.go:179] * Verifying gcp-auth addon...
	I0904 06:01:16.523015 1521981 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 06:01:16.525620 1521981 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 06:01:16.525641 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:16.722246 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:16.722441 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:16.762589 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:16.937789 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:17.026372 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:17.222572 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:17.222595 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 06:01:17.290906 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:17.290945 1521981 retry.go:31] will retry after 1.255538609s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:17.437655 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:17.526423 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:17.712867 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:17.722724 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:17.722812 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:17.937675 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:18.026398 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:18.222590 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:18.222809 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:18.437584 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:18.526275 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:18.547321 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:18.721960 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:18.722062 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:18.937263 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:19.026322 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:19.085897 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:19.085934 1521981 retry.go:31] will retry after 1.074491437s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:19.222060 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:19.222210 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:19.436953 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:19.526764 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:19.722603 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:19.722685 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:19.937693 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:20.026707 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:20.160859 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0904 06:01:20.213511 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:20.222094 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:20.222358 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:20.437536 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:20.526248 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:20.698410 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:20.698458 1521981 retry.go:31] will retry after 2.446129928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:20.722452 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:20.722708 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:20.937219 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:21.026046 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:21.222765 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:21.222825 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:21.437656 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:21.526520 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:21.722069 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:21.722108 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:21.937021 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:22.027212 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:22.222557 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:22.222669 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:22.437447 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:22.526430 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:22.713087 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:22.722652 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:22.722749 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:22.937418 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:23.026466 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:23.145757 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:23.222477 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:23.222539 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:23.437347 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:23.526389 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:23.691532 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:23.691563 1521981 retry.go:31] will retry after 3.304894819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:23.722183 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:23.722230 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:23.937071 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:24.026998 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:24.221976 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:24.222151 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:24.436814 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:24.526700 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:24.713362 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:24.721862 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:24.721928 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:24.937616 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:25.026466 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:25.221890 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:25.222154 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:25.436972 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:25.526854 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:25.722875 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:25.722935 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:25.936891 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:26.026622 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:26.222177 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:26.222283 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:26.437295 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:26.526274 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:26.721687 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:26.721749 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:26.937604 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:26.996647 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:27.027010 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:27.212763 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:27.222384 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:27.222510 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:27.437394 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:27.526150 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:27.542275 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:27.542314 1521981 retry.go:31] will retry after 3.035780017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:27.722500 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:27.722659 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:27.937979 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:28.027157 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:28.222033 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:28.222239 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:28.437690 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:28.526161 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:28.722239 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:28.722383 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:28.937166 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:29.025671 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:29.213448 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:29.222370 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:29.222581 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:29.437255 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:29.525943 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:29.721820 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:29.721881 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:29.937899 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:30.026673 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:30.222344 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:30.222499 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:30.437168 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:30.525964 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:30.579092 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:30.722836 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:30.722848 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:30.937602 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:31.026234 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:31.130103 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:31.130136 1521981 retry.go:31] will retry after 8.568651419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:31.222396 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:31.222578 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:31.437809 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:31.526623 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:31.713305 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:31.721966 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:31.722167 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:31.936745 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:32.026698 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:32.222013 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:32.222273 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:32.437392 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:32.526159 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:32.722499 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:32.722626 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:32.937489 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:33.026349 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:33.221514 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:33.221710 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:33.437486 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:33.526288 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:33.722509 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:33.722712 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:33.937502 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:34.026396 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:34.213194 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:34.222571 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:34.222777 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:34.437501 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:34.526093 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:34.722548 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:34.722687 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:34.937588 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:35.026583 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:35.221639 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:35.221763 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:35.437703 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:35.526521 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:35.722613 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:35.722754 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:35.937338 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:36.026149 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:36.222472 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:36.222574 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:36.437539 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:36.526221 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:36.712978 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:36.722492 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:36.722537 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:36.937196 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:37.026106 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:37.222381 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:37.222425 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:37.437426 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:37.526258 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:37.722976 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:37.722984 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:37.937840 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:38.026686 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:38.222109 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:38.222252 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:38.437106 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:38.526905 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:38.722060 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:38.722112 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:38.936895 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:39.026700 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:39.213484 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:39.222248 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:39.222408 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:39.437206 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:39.526243 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:39.699440 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:39.722664 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:39.722715 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:39.937267 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:40.026888 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:40.222063 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:40.222197 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 06:01:40.233796 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:40.233838 1521981 retry.go:31] will retry after 7.308333615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:40.437898 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:40.526574 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:40.722123 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:40.722339 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:40.937107 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:41.025830 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:41.213593 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:41.222168 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:41.222282 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:41.436845 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:41.526583 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:41.721953 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:41.722051 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:41.937120 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:42.025845 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:42.221960 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:42.222128 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:42.437040 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:42.526840 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:42.722346 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:42.722411 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:42.937369 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:43.026182 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:43.222336 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:43.222495 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:43.437274 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:43.526217 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:43.712926 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:43.722332 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:43.722504 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:43.937184 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:44.025856 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:44.222696 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:44.222781 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:44.437659 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:44.526772 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:44.722193 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:44.722235 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:44.936888 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:45.026634 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:45.222097 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:45.222282 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:45.437056 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:45.526619 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:45.713354 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:45.721612 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:45.721750 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:45.937776 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:46.026385 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:46.221511 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:46.221584 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:46.437291 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:46.525879 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:46.722339 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:46.722453 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:46.937400 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:47.026310 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:47.222461 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:47.222524 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:47.437554 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:47.526205 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:47.543268 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:01:47.721707 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:47.721882 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:47.937826 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:48.026350 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0904 06:01:48.080612 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:01:48.080643 1521981 retry.go:31] will retry after 19.325756649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 06:01:48.213295 1521981 node_ready.go:57] node "addons-306757" has "Ready":"False" status (will retry)
	I0904 06:01:48.221652 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:48.221752 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:48.437807 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:48.526726 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:48.721816 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:48.721939 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:48.937930 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:49.026814 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:49.222663 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:49.222717 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:49.437698 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:49.526620 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:49.722347 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:49.722506 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:49.937189 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:50.025886 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:50.213233 1521981 node_ready.go:49] node "addons-306757" is "Ready"
	I0904 06:01:50.213275 1521981 node_ready.go:38] duration metric: took 41.503362959s for node "addons-306757" to be "Ready" ...
	I0904 06:01:50.213304 1521981 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:01:50.213368 1521981 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:01:50.227147 1521981 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 06:01:50.227178 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:50.227538 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:50.301591 1521981 api_server.go:72] duration metric: took 44.002228333s to wait for apiserver process to appear ...
	I0904 06:01:50.301685 1521981 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:01:50.301723 1521981 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0904 06:01:50.306262 1521981 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0904 06:01:50.307130 1521981 api_server.go:141] control plane version: v1.34.0
	I0904 06:01:50.307157 1521981 api_server.go:131] duration metric: took 5.452254ms to wait for apiserver health ...
	I0904 06:01:50.307165 1521981 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:01:50.317257 1521981 system_pods.go:59] 20 kube-system pods found
	I0904 06:01:50.317297 1521981 system_pods.go:61] "amd-gpu-device-plugin-rp9pp" [bca304f8-9027-4298-bd42-61a669d3e210] Pending
	I0904 06:01:50.317310 1521981 system_pods.go:61] "coredns-66bc5c9577-wgmn5" [51d30edf-8076-47d5-9e23-3df8e6190b67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:01:50.317317 1521981 system_pods.go:61] "csi-hostpath-attacher-0" [c73455d9-9918-415d-af35-97bf6b170f6c] Pending
	I0904 06:01:50.317328 1521981 system_pods.go:61] "csi-hostpath-resizer-0" [3a2123fe-3e84-4560-97df-bd9e35374e0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 06:01:50.317333 1521981 system_pods.go:61] "csi-hostpathplugin-pb9h9" [da16d9fa-7de3-413d-a94f-29189f5d04a0] Pending
	I0904 06:01:50.317338 1521981 system_pods.go:61] "etcd-addons-306757" [51ef0edf-2064-4479-926e-d6077b4822c1] Running
	I0904 06:01:50.317343 1521981 system_pods.go:61] "kindnet-d697q" [3008e34d-5c94-4a7c-b5f7-d8b170b89284] Running
	I0904 06:01:50.317348 1521981 system_pods.go:61] "kube-apiserver-addons-306757" [265b4b76-2edc-47a2-8b7a-2129ac665bf2] Running
	I0904 06:01:50.317352 1521981 system_pods.go:61] "kube-controller-manager-addons-306757" [969359b8-dcf4-434e-b3b5-e3ae2e66c5e2] Running
	I0904 06:01:50.317364 1521981 system_pods.go:61] "kube-ingress-dns-minikube" [7b86dc0c-ae37-420e-affb-4b359da463a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 06:01:50.317369 1521981 system_pods.go:61] "kube-proxy-wmldx" [58f84b40-ffc0-47a4-a789-7df52cc2ed11] Running
	I0904 06:01:50.317379 1521981 system_pods.go:61] "kube-scheduler-addons-306757" [d94d296c-40ed-42bd-8db1-6c5ee69d47bc] Running
	I0904 06:01:50.317384 1521981 system_pods.go:61] "metrics-server-85b7d694d7-fclpw" [606933e4-ec1f-4aa3-9826-a2f054695f6a] Pending
	I0904 06:01:50.317394 1521981 system_pods.go:61] "nvidia-device-plugin-daemonset-qljm9" [8e7ef4b6-e9c1-42de-adf1-b264f8fd5ce2] Pending
	I0904 06:01:50.317402 1521981 system_pods.go:61] "registry-66898fdd98-s8qqg" [8143b624-da88-4323-8441-706602e975b8] Pending
	I0904 06:01:50.317406 1521981 system_pods.go:61] "registry-creds-764b6fb674-pqjpl" [e51cc19d-c9d1-4ba8-b161-514f39bbc7cf] Pending
	I0904 06:01:50.317413 1521981 system_pods.go:61] "registry-proxy-hklmr" [d5eee02a-bf3e-4376-a820-fe7cb6e83409] Pending
	I0904 06:01:50.317418 1521981 system_pods.go:61] "snapshot-controller-7d9fbc56b8-l7r9k" [3a0bd30f-bb22-480d-9418-fee1bf541833] Pending
	I0904 06:01:50.317429 1521981 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rjd4r" [09d97ff6-5caa-4ed9-b992-64b718738d89] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 06:01:50.317442 1521981 system_pods.go:61] "storage-provisioner" [7a77f8a2-634f-45bd-ac55-74e091a2cc01] Pending
	I0904 06:01:50.317451 1521981 system_pods.go:74] duration metric: took 10.27926ms to wait for pod list to return data ...
	I0904 06:01:50.317465 1521981 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:01:50.319660 1521981 default_sa.go:45] found service account: "default"
	I0904 06:01:50.319688 1521981 default_sa.go:55] duration metric: took 2.214464ms for default service account to be created ...
	I0904 06:01:50.319699 1521981 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:01:50.327267 1521981 system_pods.go:86] 20 kube-system pods found
	I0904 06:01:50.327298 1521981 system_pods.go:89] "amd-gpu-device-plugin-rp9pp" [bca304f8-9027-4298-bd42-61a669d3e210] Pending
	I0904 06:01:50.327315 1521981 system_pods.go:89] "coredns-66bc5c9577-wgmn5" [51d30edf-8076-47d5-9e23-3df8e6190b67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:01:50.327321 1521981 system_pods.go:89] "csi-hostpath-attacher-0" [c73455d9-9918-415d-af35-97bf6b170f6c] Pending
	I0904 06:01:50.327331 1521981 system_pods.go:89] "csi-hostpath-resizer-0" [3a2123fe-3e84-4560-97df-bd9e35374e0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 06:01:50.327337 1521981 system_pods.go:89] "csi-hostpathplugin-pb9h9" [da16d9fa-7de3-413d-a94f-29189f5d04a0] Pending
	I0904 06:01:50.327342 1521981 system_pods.go:89] "etcd-addons-306757" [51ef0edf-2064-4479-926e-d6077b4822c1] Running
	I0904 06:01:50.327348 1521981 system_pods.go:89] "kindnet-d697q" [3008e34d-5c94-4a7c-b5f7-d8b170b89284] Running
	I0904 06:01:50.327362 1521981 system_pods.go:89] "kube-apiserver-addons-306757" [265b4b76-2edc-47a2-8b7a-2129ac665bf2] Running
	I0904 06:01:50.327368 1521981 system_pods.go:89] "kube-controller-manager-addons-306757" [969359b8-dcf4-434e-b3b5-e3ae2e66c5e2] Running
	I0904 06:01:50.327376 1521981 system_pods.go:89] "kube-ingress-dns-minikube" [7b86dc0c-ae37-420e-affb-4b359da463a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 06:01:50.327381 1521981 system_pods.go:89] "kube-proxy-wmldx" [58f84b40-ffc0-47a4-a789-7df52cc2ed11] Running
	I0904 06:01:50.327386 1521981 system_pods.go:89] "kube-scheduler-addons-306757" [d94d296c-40ed-42bd-8db1-6c5ee69d47bc] Running
	I0904 06:01:50.327391 1521981 system_pods.go:89] "metrics-server-85b7d694d7-fclpw" [606933e4-ec1f-4aa3-9826-a2f054695f6a] Pending
	I0904 06:01:50.327396 1521981 system_pods.go:89] "nvidia-device-plugin-daemonset-qljm9" [8e7ef4b6-e9c1-42de-adf1-b264f8fd5ce2] Pending
	I0904 06:01:50.327401 1521981 system_pods.go:89] "registry-66898fdd98-s8qqg" [8143b624-da88-4323-8441-706602e975b8] Pending
	I0904 06:01:50.327407 1521981 system_pods.go:89] "registry-creds-764b6fb674-pqjpl" [e51cc19d-c9d1-4ba8-b161-514f39bbc7cf] Pending
	I0904 06:01:50.327414 1521981 system_pods.go:89] "registry-proxy-hklmr" [d5eee02a-bf3e-4376-a820-fe7cb6e83409] Pending
	I0904 06:01:50.327421 1521981 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l7r9k" [3a0bd30f-bb22-480d-9418-fee1bf541833] Pending
	I0904 06:01:50.327430 1521981 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rjd4r" [09d97ff6-5caa-4ed9-b992-64b718738d89] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 06:01:50.327448 1521981 system_pods.go:89] "storage-provisioner" [7a77f8a2-634f-45bd-ac55-74e091a2cc01] Pending
	I0904 06:01:50.327471 1521981 retry.go:31] will retry after 208.865657ms: missing components: kube-dns
	I0904 06:01:50.438500 1521981 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 06:01:50.438532 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:50.602788 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:50.610458 1521981 system_pods.go:86] 20 kube-system pods found
	I0904 06:01:50.610499 1521981 system_pods.go:89] "amd-gpu-device-plugin-rp9pp" [bca304f8-9027-4298-bd42-61a669d3e210] Pending
	I0904 06:01:50.610509 1521981 system_pods.go:89] "coredns-66bc5c9577-wgmn5" [51d30edf-8076-47d5-9e23-3df8e6190b67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:01:50.610515 1521981 system_pods.go:89] "csi-hostpath-attacher-0" [c73455d9-9918-415d-af35-97bf6b170f6c] Pending
	I0904 06:01:50.610522 1521981 system_pods.go:89] "csi-hostpath-resizer-0" [3a2123fe-3e84-4560-97df-bd9e35374e0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 06:01:50.610528 1521981 system_pods.go:89] "csi-hostpathplugin-pb9h9" [da16d9fa-7de3-413d-a94f-29189f5d04a0] Pending
	I0904 06:01:50.610534 1521981 system_pods.go:89] "etcd-addons-306757" [51ef0edf-2064-4479-926e-d6077b4822c1] Running
	I0904 06:01:50.610540 1521981 system_pods.go:89] "kindnet-d697q" [3008e34d-5c94-4a7c-b5f7-d8b170b89284] Running
	I0904 06:01:50.610546 1521981 system_pods.go:89] "kube-apiserver-addons-306757" [265b4b76-2edc-47a2-8b7a-2129ac665bf2] Running
	I0904 06:01:50.610556 1521981 system_pods.go:89] "kube-controller-manager-addons-306757" [969359b8-dcf4-434e-b3b5-e3ae2e66c5e2] Running
	I0904 06:01:50.610568 1521981 system_pods.go:89] "kube-ingress-dns-minikube" [7b86dc0c-ae37-420e-affb-4b359da463a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 06:01:50.610578 1521981 system_pods.go:89] "kube-proxy-wmldx" [58f84b40-ffc0-47a4-a789-7df52cc2ed11] Running
	I0904 06:01:50.610584 1521981 system_pods.go:89] "kube-scheduler-addons-306757" [d94d296c-40ed-42bd-8db1-6c5ee69d47bc] Running
	I0904 06:01:50.610589 1521981 system_pods.go:89] "metrics-server-85b7d694d7-fclpw" [606933e4-ec1f-4aa3-9826-a2f054695f6a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:01:50.610598 1521981 system_pods.go:89] "nvidia-device-plugin-daemonset-qljm9" [8e7ef4b6-e9c1-42de-adf1-b264f8fd5ce2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 06:01:50.610605 1521981 system_pods.go:89] "registry-66898fdd98-s8qqg" [8143b624-da88-4323-8441-706602e975b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 06:01:50.610610 1521981 system_pods.go:89] "registry-creds-764b6fb674-pqjpl" [e51cc19d-c9d1-4ba8-b161-514f39bbc7cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 06:01:50.610620 1521981 system_pods.go:89] "registry-proxy-hklmr" [d5eee02a-bf3e-4376-a820-fe7cb6e83409] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 06:01:50.610632 1521981 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l7r9k" [3a0bd30f-bb22-480d-9418-fee1bf541833] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 06:01:50.610645 1521981 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rjd4r" [09d97ff6-5caa-4ed9-b992-64b718738d89] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 06:01:50.610656 1521981 system_pods.go:89] "storage-provisioner" [7a77f8a2-634f-45bd-ac55-74e091a2cc01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 06:01:50.610676 1521981 retry.go:31] will retry after 319.340978ms: missing components: kube-dns
	I0904 06:01:50.722811 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:50.722927 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:50.935847 1521981 system_pods.go:86] 20 kube-system pods found
	I0904 06:01:50.935946 1521981 system_pods.go:89] "amd-gpu-device-plugin-rp9pp" [bca304f8-9027-4298-bd42-61a669d3e210] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 06:01:50.935973 1521981 system_pods.go:89] "coredns-66bc5c9577-wgmn5" [51d30edf-8076-47d5-9e23-3df8e6190b67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:01:50.936013 1521981 system_pods.go:89] "csi-hostpath-attacher-0" [c73455d9-9918-415d-af35-97bf6b170f6c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 06:01:50.936040 1521981 system_pods.go:89] "csi-hostpath-resizer-0" [3a2123fe-3e84-4560-97df-bd9e35374e0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 06:01:50.936133 1521981 system_pods.go:89] "csi-hostpathplugin-pb9h9" [da16d9fa-7de3-413d-a94f-29189f5d04a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 06:01:50.936159 1521981 system_pods.go:89] "etcd-addons-306757" [51ef0edf-2064-4479-926e-d6077b4822c1] Running
	I0904 06:01:50.936175 1521981 system_pods.go:89] "kindnet-d697q" [3008e34d-5c94-4a7c-b5f7-d8b170b89284] Running
	I0904 06:01:50.936190 1521981 system_pods.go:89] "kube-apiserver-addons-306757" [265b4b76-2edc-47a2-8b7a-2129ac665bf2] Running
	I0904 06:01:50.936200 1521981 system_pods.go:89] "kube-controller-manager-addons-306757" [969359b8-dcf4-434e-b3b5-e3ae2e66c5e2] Running
	I0904 06:01:50.936208 1521981 system_pods.go:89] "kube-ingress-dns-minikube" [7b86dc0c-ae37-420e-affb-4b359da463a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 06:01:50.936213 1521981 system_pods.go:89] "kube-proxy-wmldx" [58f84b40-ffc0-47a4-a789-7df52cc2ed11] Running
	I0904 06:01:50.936219 1521981 system_pods.go:89] "kube-scheduler-addons-306757" [d94d296c-40ed-42bd-8db1-6c5ee69d47bc] Running
	I0904 06:01:50.936241 1521981 system_pods.go:89] "metrics-server-85b7d694d7-fclpw" [606933e4-ec1f-4aa3-9826-a2f054695f6a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:01:50.936249 1521981 system_pods.go:89] "nvidia-device-plugin-daemonset-qljm9" [8e7ef4b6-e9c1-42de-adf1-b264f8fd5ce2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 06:01:50.936274 1521981 system_pods.go:89] "registry-66898fdd98-s8qqg" [8143b624-da88-4323-8441-706602e975b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 06:01:50.936287 1521981 system_pods.go:89] "registry-creds-764b6fb674-pqjpl" [e51cc19d-c9d1-4ba8-b161-514f39bbc7cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 06:01:50.936295 1521981 system_pods.go:89] "registry-proxy-hklmr" [d5eee02a-bf3e-4376-a820-fe7cb6e83409] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 06:01:50.936306 1521981 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l7r9k" [3a0bd30f-bb22-480d-9418-fee1bf541833] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 06:01:50.936319 1521981 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rjd4r" [09d97ff6-5caa-4ed9-b992-64b718738d89] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 06:01:50.936330 1521981 system_pods.go:89] "storage-provisioner" [7a77f8a2-634f-45bd-ac55-74e091a2cc01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 06:01:50.936350 1521981 retry.go:31] will retry after 305.23352ms: missing components: kube-dns
	I0904 06:01:50.940164 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:51.031662 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:51.223658 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:51.223704 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:51.325966 1521981 system_pods.go:86] 20 kube-system pods found
	I0904 06:01:51.326001 1521981 system_pods.go:89] "amd-gpu-device-plugin-rp9pp" [bca304f8-9027-4298-bd42-61a669d3e210] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 06:01:51.326006 1521981 system_pods.go:89] "coredns-66bc5c9577-wgmn5" [51d30edf-8076-47d5-9e23-3df8e6190b67] Running
	I0904 06:01:51.326014 1521981 system_pods.go:89] "csi-hostpath-attacher-0" [c73455d9-9918-415d-af35-97bf6b170f6c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 06:01:51.326020 1521981 system_pods.go:89] "csi-hostpath-resizer-0" [3a2123fe-3e84-4560-97df-bd9e35374e0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 06:01:51.326028 1521981 system_pods.go:89] "csi-hostpathplugin-pb9h9" [da16d9fa-7de3-413d-a94f-29189f5d04a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 06:01:51.326035 1521981 system_pods.go:89] "etcd-addons-306757" [51ef0edf-2064-4479-926e-d6077b4822c1] Running
	I0904 06:01:51.326042 1521981 system_pods.go:89] "kindnet-d697q" [3008e34d-5c94-4a7c-b5f7-d8b170b89284] Running
	I0904 06:01:51.326050 1521981 system_pods.go:89] "kube-apiserver-addons-306757" [265b4b76-2edc-47a2-8b7a-2129ac665bf2] Running
	I0904 06:01:51.326056 1521981 system_pods.go:89] "kube-controller-manager-addons-306757" [969359b8-dcf4-434e-b3b5-e3ae2e66c5e2] Running
	I0904 06:01:51.326074 1521981 system_pods.go:89] "kube-ingress-dns-minikube" [7b86dc0c-ae37-420e-affb-4b359da463a2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 06:01:51.326077 1521981 system_pods.go:89] "kube-proxy-wmldx" [58f84b40-ffc0-47a4-a789-7df52cc2ed11] Running
	I0904 06:01:51.326082 1521981 system_pods.go:89] "kube-scheduler-addons-306757" [d94d296c-40ed-42bd-8db1-6c5ee69d47bc] Running
	I0904 06:01:51.326087 1521981 system_pods.go:89] "metrics-server-85b7d694d7-fclpw" [606933e4-ec1f-4aa3-9826-a2f054695f6a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:01:51.326096 1521981 system_pods.go:89] "nvidia-device-plugin-daemonset-qljm9" [8e7ef4b6-e9c1-42de-adf1-b264f8fd5ce2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 06:01:51.326104 1521981 system_pods.go:89] "registry-66898fdd98-s8qqg" [8143b624-da88-4323-8441-706602e975b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 06:01:51.326109 1521981 system_pods.go:89] "registry-creds-764b6fb674-pqjpl" [e51cc19d-c9d1-4ba8-b161-514f39bbc7cf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 06:01:51.326117 1521981 system_pods.go:89] "registry-proxy-hklmr" [d5eee02a-bf3e-4376-a820-fe7cb6e83409] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 06:01:51.326121 1521981 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l7r9k" [3a0bd30f-bb22-480d-9418-fee1bf541833] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 06:01:51.326130 1521981 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rjd4r" [09d97ff6-5caa-4ed9-b992-64b718738d89] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 06:01:51.326157 1521981 system_pods.go:89] "storage-provisioner" [7a77f8a2-634f-45bd-ac55-74e091a2cc01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 06:01:51.326171 1521981 system_pods.go:126] duration metric: took 1.006466529s to wait for k8s-apps to be running ...
	I0904 06:01:51.326184 1521981 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:01:51.326236 1521981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:01:51.369085 1521981 system_svc.go:56] duration metric: took 42.890497ms WaitForService to wait for kubelet
	I0904 06:01:51.369116 1521981 kubeadm.go:578] duration metric: took 45.069760909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:01:51.369136 1521981 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:01:51.372080 1521981 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:01:51.372109 1521981 node_conditions.go:123] node cpu capacity is 8
	I0904 06:01:51.372123 1521981 node_conditions.go:105] duration metric: took 2.981613ms to run NodePressure ...
	I0904 06:01:51.372135 1521981 start.go:241] waiting for startup goroutines ...
	I0904 06:01:51.438149 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:51.526850 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:51.722835 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:51.723016 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:51.938235 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:52.027712 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:52.222940 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:52.223101 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:52.504797 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:52.526951 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:52.723457 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:52.723469 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:52.938086 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:53.026772 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:53.222995 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:53.223151 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:53.437461 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:53.526196 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:53.722451 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:53.722480 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:53.937877 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:54.026586 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:54.223693 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:54.223705 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:54.438015 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:54.526927 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:54.723108 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:54.723269 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:54.937645 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:55.038541 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:55.222667 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:55.222762 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:55.438219 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:55.526038 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:55.723053 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:55.723106 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:55.937094 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:56.027011 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:56.223634 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:56.223695 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:56.437953 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:56.526492 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:56.722508 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:56.722583 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:56.937994 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:57.027223 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:57.223195 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:57.223414 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:57.438177 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:57.526116 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:57.723271 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:57.723403 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:57.937708 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:58.038080 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:58.222844 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:58.223420 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:58.438181 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:58.526658 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:58.722649 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:58.722728 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:58.938256 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:59.027164 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:59.222680 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:59.222791 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:59.438243 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:01:59.527137 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:01:59.723135 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:01:59.723175 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:01:59.937911 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:00.026938 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:00.222904 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:00.222910 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:00.438405 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:00.526397 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:00.722304 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:00.722369 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:00.937601 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:01.026147 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:01.222396 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:01.222451 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:01.438513 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:01.538816 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:01.722788 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:01.722893 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:01.938553 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:02.026323 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:02.225098 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:02.225354 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:02.437732 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:02.526321 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:02.722387 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:02.722443 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:02.938192 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:03.038669 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:03.222652 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:03.222768 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:03.438538 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:03.526071 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:03.724308 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:03.724382 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:03.937697 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:04.038601 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:04.222775 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:04.222879 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:04.438319 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:04.526445 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:04.722091 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:04.722170 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:05.003051 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:05.027151 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:05.225713 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:05.303270 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:05.501362 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:05.527032 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:05.723556 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:05.723608 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:06.004041 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:06.026308 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:06.224541 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:06.225113 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:06.502247 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:06.528030 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:06.723283 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:06.723427 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:06.937663 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:07.026485 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:07.222553 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:07.222619 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:07.406882 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:02:07.438331 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:07.526210 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:07.723879 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:07.723915 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:07.937835 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:08.026689 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:08.223640 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:08.223683 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:08.412734 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.005803757s)
	W0904 06:02:08.412784 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:02:08.412810 1521981 retry.go:31] will retry after 28.473205883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:02:08.438209 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:08.527225 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:08.722515 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:08.722976 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:08.937424 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:09.025991 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:09.223147 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:09.223258 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:09.438163 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:09.526713 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:09.722693 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:09.722838 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:09.938645 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:10.026613 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:10.222768 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:10.222801 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:10.437924 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:10.526656 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:10.723123 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:10.723199 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:10.937784 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:11.026714 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:11.222633 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:11.222820 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:11.437939 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:11.526358 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:11.722874 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:11.723027 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:11.937973 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:12.027286 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:12.222834 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:12.222883 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:12.438455 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:12.526395 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:12.723164 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:12.723291 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:12.937902 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:13.038466 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:13.225106 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:13.225153 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:13.437198 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:13.526835 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:13.722977 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:13.723046 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:13.937318 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:14.026145 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:14.223478 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:14.223706 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:14.438506 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:14.526170 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:14.722690 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:14.722744 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:14.938254 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:15.027248 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:15.222490 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:15.222546 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:15.438380 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:15.539325 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:15.723158 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:15.723344 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:15.937752 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:16.026761 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:16.223441 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:16.223464 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:16.438602 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:16.526631 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:16.722773 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:16.722800 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:16.938246 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:17.026895 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:17.223333 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:17.223380 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:17.438128 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:17.537983 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:17.723106 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:17.723221 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:17.937604 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:18.026514 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:18.223441 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:18.223477 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:18.437807 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:18.526677 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:18.722560 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:18.722599 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:18.937695 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:19.026572 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:19.222575 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:19.222597 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:19.437736 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:19.538140 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:19.722063 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:19.722106 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:19.936871 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:20.026909 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:20.222852 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:20.222888 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:20.438153 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:20.526713 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:20.723076 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:20.723107 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:20.937375 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:21.027021 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:21.223318 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:21.223386 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:21.502792 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:21.605449 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:21.723511 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:21.724605 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:22.003005 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:22.103134 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:22.224303 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:22.224664 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:22.510951 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:22.602543 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:22.722957 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:22.806362 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:23.006285 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:23.026037 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:23.223774 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:23.223843 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:23.438224 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:23.527025 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:23.723416 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:23.723657 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:23.938290 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:24.027111 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:24.222286 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:24.222341 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:24.437222 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:24.526073 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:24.722251 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:24.722335 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:24.938051 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:25.027061 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:25.223320 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:25.223581 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:25.438069 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:25.526889 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:25.723558 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:25.723630 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:25.937856 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:26.026506 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:26.222926 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:26.223059 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:26.437264 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:26.526257 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:26.722800 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:26.722818 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:26.937858 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:27.026722 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:27.222578 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:27.222652 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:27.437721 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:27.526589 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:27.723001 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:27.723145 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:27.937400 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:28.026300 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:28.222727 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:28.222885 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:28.438203 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:28.527004 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:28.723170 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:28.723248 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:28.937523 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:29.026422 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:29.222638 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:29.222894 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:29.438320 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:29.526567 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:29.722936 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:29.723013 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:29.937552 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:30.026454 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:30.222964 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:30.222978 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:30.437812 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:30.526629 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:30.722792 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:30.722888 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:30.938023 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:31.026779 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:31.222812 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:31.222862 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:31.438210 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:31.527051 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:31.722962 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:31.723012 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:31.938039 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:32.026916 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:32.223468 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:32.223506 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:32.440185 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:32.638834 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:32.723315 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:32.723349 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:32.937584 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:33.026504 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:33.222990 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:33.223137 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:33.438435 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:33.526065 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:33.726342 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:33.726465 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:34.004009 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:34.103362 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:34.302067 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:34.302372 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 06:02:34.505366 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:34.526696 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:34.724283 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:34.802921 1521981 kapi.go:107] duration metric: took 1m22.583722768s to wait for kubernetes.io/minikube-addons=registry ...
	I0904 06:02:35.005246 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:35.026386 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:35.223388 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:35.504110 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:35.526934 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:35.723457 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:36.003851 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:36.026501 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:36.222751 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:36.438123 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:36.527089 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:36.722453 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:36.886718 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 06:02:36.937450 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:37.027165 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:37.222549 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:37.437904 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:37.526508 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:37.722838 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:37.916850 1521981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.030088047s)
	W0904 06:02:37.916896 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:02:37.916924 1521981 retry.go:31] will retry after 16.640687933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:02:37.938204 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:38.026798 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:38.224849 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:38.437890 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:38.526427 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:38.722597 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:38.938176 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:39.027126 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:39.223555 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:39.437948 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:39.526864 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:39.723545 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:39.940815 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:40.041808 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:40.223428 1521981 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 06:02:40.441225 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:40.539418 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:40.722820 1521981 kapi.go:107] duration metric: took 1m28.503714163s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 06:02:40.939025 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:41.039078 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:41.531350 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:41.531647 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:41.937565 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:42.026460 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:42.437696 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:42.526426 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:42.938387 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:43.026350 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:43.438424 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:43.525852 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:43.938238 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:44.026345 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 06:02:44.438549 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:44.538537 1521981 kapi.go:107] duration metric: took 1m28.015521122s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 06:02:44.563646 1521981 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-306757 cluster.
	I0904 06:02:44.612192 1521981 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 06:02:44.618283 1521981 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0904 06:02:44.938594 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:45.437901 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:45.937686 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:46.437921 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:46.937354 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:47.438943 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:47.937919 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:48.437780 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:48.937901 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:49.438173 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:49.937694 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:50.438202 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:50.937514 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:51.437153 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:51.938398 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:52.438724 1521981 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 06:02:52.937443 1521981 kapi.go:107] duration metric: took 1m39.503414846s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0904 06:02:54.559002 1521981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0904 06:02:55.097411 1521981 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 06:02:55.097525 1521981 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0904 06:02:55.100380 1521981 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, storage-provisioner, registry-creds, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0904 06:02:55.102191 1521981 addons.go:514] duration metric: took 1m48.802802971s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns storage-provisioner registry-creds nvidia-device-plugin cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0904 06:02:55.102239 1521981 start.go:246] waiting for cluster config update ...
	I0904 06:02:55.102266 1521981 start.go:255] writing updated cluster config ...
	I0904 06:02:55.102527 1521981 ssh_runner.go:195] Run: rm -f paused
	I0904 06:02:55.106021 1521981 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:02:55.109741 1521981 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wgmn5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:55.113919 1521981 pod_ready.go:94] pod "coredns-66bc5c9577-wgmn5" is "Ready"
	I0904 06:02:55.113940 1521981 pod_ready.go:86] duration metric: took 4.17718ms for pod "coredns-66bc5c9577-wgmn5" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:55.115753 1521981 pod_ready.go:83] waiting for pod "etcd-addons-306757" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:55.119237 1521981 pod_ready.go:94] pod "etcd-addons-306757" is "Ready"
	I0904 06:02:55.119259 1521981 pod_ready.go:86] duration metric: took 3.47765ms for pod "etcd-addons-306757" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:55.121223 1521981 pod_ready.go:83] waiting for pod "kube-apiserver-addons-306757" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:55.124658 1521981 pod_ready.go:94] pod "kube-apiserver-addons-306757" is "Ready"
	I0904 06:02:55.124677 1521981 pod_ready.go:86] duration metric: took 3.438322ms for pod "kube-apiserver-addons-306757" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:55.126521 1521981 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-306757" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:55.510137 1521981 pod_ready.go:94] pod "kube-controller-manager-addons-306757" is "Ready"
	I0904 06:02:55.510167 1521981 pod_ready.go:86] duration metric: took 383.612963ms for pod "kube-controller-manager-addons-306757" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:55.710150 1521981 pod_ready.go:83] waiting for pod "kube-proxy-wmldx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:56.110643 1521981 pod_ready.go:94] pod "kube-proxy-wmldx" is "Ready"
	I0904 06:02:56.110676 1521981 pod_ready.go:86] duration metric: took 400.495802ms for pod "kube-proxy-wmldx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:56.310612 1521981 pod_ready.go:83] waiting for pod "kube-scheduler-addons-306757" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:56.709557 1521981 pod_ready.go:94] pod "kube-scheduler-addons-306757" is "Ready"
	I0904 06:02:56.709588 1521981 pod_ready.go:86] duration metric: took 398.94897ms for pod "kube-scheduler-addons-306757" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:02:56.709599 1521981 pod_ready.go:40] duration metric: took 1.603547467s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:02:56.754023 1521981 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:02:56.755858 1521981 out.go:179] * Done! kubectl is now configured to use "addons-306757" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 06:05:01 addons-306757 crio[1046]: time="2025-09-04 06:05:01.123334496Z" level=info msg="Removed pod sandbox: 6d351a8ae0cb343babb1a3cfe3d9ec629ea42b1051da28f3d99dce4341c3ffb9" id=4a21be42-8ac6-4193-96a1-99019803a182 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.584239152Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-wx9jg/POD" id=54e97bf9-e63f-462b-b5c9-abb8b6d265a1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.584326458Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.603427788Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wx9jg Namespace:default ID:58a09dca18a653b9386121048a8b079d5db2a933c2b85ae1326eb82878522ec0 UID:bdb8ec94-8ccb-48fb-be65-849fab4978f2 NetNS:/var/run/netns/a37c7b5d-52f6-424f-921f-afb091df390c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.603472716Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-wx9jg to CNI network \"kindnet\" (type=ptp)"
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.612546608Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wx9jg Namespace:default ID:58a09dca18a653b9386121048a8b079d5db2a933c2b85ae1326eb82878522ec0 UID:bdb8ec94-8ccb-48fb-be65-849fab4978f2 NetNS:/var/run/netns/a37c7b5d-52f6-424f-921f-afb091df390c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.612662046Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-wx9jg for CNI network kindnet (type=ptp)"
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.615062672Z" level=info msg="Ran pod sandbox 58a09dca18a653b9386121048a8b079d5db2a933c2b85ae1326eb82878522ec0 with infra container: default/hello-world-app-5d498dc89-wx9jg/POD" id=54e97bf9-e63f-462b-b5c9-abb8b6d265a1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.616233212Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e612f72e-3829-4701-877e-055ce3902321 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.616434750Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e612f72e-3829-4701-877e-055ce3902321 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.617038272Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=9f342811-942a-4acf-bbde-0cb778e9e629 name=/runtime.v1.ImageService/PullImage
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.628176952Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 04 06:06:03 addons-306757 crio[1046]: time="2025-09-04 06:06:03.776584093Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.213721836Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=9f342811-942a-4acf-bbde-0cb778e9e629 name=/runtime.v1.ImageService/PullImage
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.214270286Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ab36a739-5a61-4e8e-b844-200a354b18a5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.214886876Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ab36a739-5a61-4e8e-b844-200a354b18a5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.215711704Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=83780d09-3538-4274-b189-8c781301805a name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.216412273Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=83780d09-3538-4274-b189-8c781301805a name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.219552397Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-wx9jg/hello-world-app" id=d4e2211d-2a07-4a75-a924-6c2cb05c01df name=/runtime.v1.RuntimeService/CreateContainer
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.219654350Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.233521852Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e541b4c3577c17f70078ed938225a7dd48d3edf300ecc72761b2257a6d83a7ef/merged/etc/passwd: no such file or directory"
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.233556436Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e541b4c3577c17f70078ed938225a7dd48d3edf300ecc72761b2257a6d83a7ef/merged/etc/group: no such file or directory"
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.281725893Z" level=info msg="Created container 00ff8b35215c7bdd281397ba1ce41efe1bde7ba93e04adf0c6208f8499aedac3: default/hello-world-app-5d498dc89-wx9jg/hello-world-app" id=d4e2211d-2a07-4a75-a924-6c2cb05c01df name=/runtime.v1.RuntimeService/CreateContainer
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.282393175Z" level=info msg="Starting container: 00ff8b35215c7bdd281397ba1ce41efe1bde7ba93e04adf0c6208f8499aedac3" id=d0a3c19f-733e-4659-8acd-7a2391e5d892 name=/runtime.v1.RuntimeService/StartContainer
	Sep 04 06:06:04 addons-306757 crio[1046]: time="2025-09-04 06:06:04.289513635Z" level=info msg="Started container" PID=12302 containerID=00ff8b35215c7bdd281397ba1ce41efe1bde7ba93e04adf0c6208f8499aedac3 description=default/hello-world-app-5d498dc89-wx9jg/hello-world-app id=d0a3c19f-733e-4659-8acd-7a2391e5d892 name=/runtime.v1.RuntimeService/StartContainer sandboxID=58a09dca18a653b9386121048a8b079d5db2a933c2b85ae1326eb82878522ec0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	00ff8b35215c7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   58a09dca18a65       hello-world-app-5d498dc89-wx9jg
	fcb2e091b71b7       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   41fb7c6e1be0c       nginx
	fbac3cdcbfe8e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   86beb21991091       busybox
	700fe81acbc58       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            3 minutes ago            Running             gadget                    0                   b671ce28db3bf       gadget-8q767
	c6c7d76922d54       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago            Running             controller                0                   387249bb68cee       ingress-nginx-controller-9cc49f96f-sxxjl
	171775381de99       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago            Running             local-path-provisioner    0                   d4fa3974c5f02       local-path-provisioner-648f6765c9-kz747
	4a1ea22a67b1b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago            Running             minikube-ingress-dns      0                   04029895fb7b3       kube-ingress-dns-minikube
	3f24a1bb9d511       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              patch                     0                   74cde72412964       ingress-nginx-admission-patch-tpjb9
	45388cacf71bd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              create                    0                   2ebc8aea02549       ingress-nginx-admission-create-h96mz
	c06974bc28ee7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   96758a07c2636       storage-provisioner
	156d4d1f58ead       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago            Running             coredns                   0                   fbef086e60923       coredns-66bc5c9577-wgmn5
	c5ed15c219379       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             4 minutes ago            Running             kube-proxy                0                   7e2ee728b6012       kube-proxy-wmldx
	26f09e878633b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             4 minutes ago            Running             kindnet-cni               0                   034e178658054       kindnet-d697q
	2c4fc7fcac4e4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago            Running             etcd                      0                   b33e28825bbd0       etcd-addons-306757
	4b118c362dd8a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago            Running             kube-apiserver            0                   854194715ce64       kube-apiserver-addons-306757
	0363a691e780d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago            Running             kube-controller-manager   0                   c8003c0f778f5       kube-controller-manager-addons-306757
	ecb7ed3df4638       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago            Running             kube-scheduler            0                   b2752952b3c0e       kube-scheduler-addons-306757
	
	
	==> coredns [156d4d1f58eadfaf3b5f74eb4a6300ce2346f8bae323687cdc1a078a6fe57fec] <==
	[INFO] 10.244.0.18:46583 - 63344 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005252733s
	[INFO] 10.244.0.18:45523 - 34974 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00468903s
	[INFO] 10.244.0.18:45523 - 34734 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005095723s
	[INFO] 10.244.0.18:53712 - 5002 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005355464s
	[INFO] 10.244.0.18:53712 - 4737 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00594343s
	[INFO] 10.244.0.18:35048 - 35989 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000152351s
	[INFO] 10.244.0.18:35048 - 35545 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175988s
	[INFO] 10.244.0.21:42511 - 55736 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000207916s
	[INFO] 10.244.0.21:55077 - 137 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00029182s
	[INFO] 10.244.0.21:43646 - 28752 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152841s
	[INFO] 10.244.0.21:46636 - 14064 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096351s
	[INFO] 10.244.0.21:34595 - 7202 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013763s
	[INFO] 10.244.0.21:44499 - 2134 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000176573s
	[INFO] 10.244.0.21:48304 - 12736 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003855029s
	[INFO] 10.244.0.21:42656 - 30872 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005249853s
	[INFO] 10.244.0.21:38173 - 62264 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006139813s
	[INFO] 10.244.0.21:50348 - 13582 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006684687s
	[INFO] 10.244.0.21:43864 - 51170 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004503506s
	[INFO] 10.244.0.21:52647 - 27141 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004613234s
	[INFO] 10.244.0.21:45571 - 26299 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006789631s
	[INFO] 10.244.0.21:33351 - 48809 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007091458s
	[INFO] 10.244.0.21:55868 - 10561 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000769226s
	[INFO] 10.244.0.21:58808 - 4995 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.003093847s
	[INFO] 10.244.0.25:60167 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000235064s
	[INFO] 10.244.0.25:46408 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000155286s
	
	
	==> describe nodes <==
	Name:               addons-306757
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-306757
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=addons-306757
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_01_01_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-306757
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:00:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-306757
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 06:05:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 06:04:03 +0000   Thu, 04 Sep 2025 06:00:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 06:04:03 +0000   Thu, 04 Sep 2025 06:00:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 06:04:03 +0000   Thu, 04 Sep 2025 06:00:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 06:04:03 +0000   Thu, 04 Sep 2025 06:01:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-306757
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d2703e20efe43c1ab8cee00be568258
	  System UUID:                44f8c4c8-967b-490d-b2f1-1d9c65753b3e
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     hello-world-app-5d498dc89-wx9jg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-8q767                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-sxxjl    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m52s
	  kube-system                 coredns-66bc5c9577-wgmn5                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m58s
	  kube-system                 etcd-addons-306757                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m4s
	  kube-system                 kindnet-d697q                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-apiserver-addons-306757                250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-306757       200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-wmldx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-306757                100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  local-path-storage          local-path-provisioner-648f6765c9-kz747     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 4m53s                 kube-proxy       
	  Warning  CgroupV1                 5m10s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s (x8 over 5m10s)  kubelet          Node addons-306757 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s (x8 over 5m10s)  kubelet          Node addons-306757 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s (x8 over 5m10s)  kubelet          Node addons-306757 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m4s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m4s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m4s                  kubelet          Node addons-306757 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m4s                  kubelet          Node addons-306757 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m4s                  kubelet          Node addons-306757 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m59s                 node-controller  Node addons-306757 event: Registered Node addons-306757 in Controller
	  Normal   NodeReady                4m14s                 kubelet          Node addons-306757 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 e7 99 b7 01 f9 08 06
	[  +4.819792] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 cb 31 e8 a7 d4 08 06
	[  +1.686116] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 82 ab 22 c3 73 08 06
	[Sep 4 05:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 8e 77 75 56 51 08 06
	[  +0.292319] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 8e 77 75 56 51 08 06
	[ +25.895647] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 66 06 76 0b 88 08 06
	[Sep 4 06:03] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[  +1.006977] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[  +2.011803] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[  +4.255528] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[Sep 4 06:04] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[ +16.126348] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[ +34.044412] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	
	
	==> etcd [2c4fc7fcac4e48a1673e5d864269b7d50c52bcf68e671217c1fba35318dfb893] <==
	{"level":"warn","ts":"2025-09-04T06:01:10.316646Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.344546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-306757\" limit:1 ","response":"range_response_count:1 size:5511"}
	{"level":"info","ts":"2025-09-04T06:01:10.316764Z","caller":"traceutil/trace.go:172","msg":"trace[1836363787] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"101.273227ms","start":"2025-09-04T06:01:10.215478Z","end":"2025-09-04T06:01:10.316752Z","steps":["trace[1836363787] 'process raft request'  (duration: 100.708859ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:01:10.316796Z","caller":"traceutil/trace.go:172","msg":"trace[813865068] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"104.827838ms","start":"2025-09-04T06:01:10.211963Z","end":"2025-09-04T06:01:10.316791Z","steps":["trace[813865068] 'process raft request'  (duration: 104.061022ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:01:10.316899Z","caller":"traceutil/trace.go:172","msg":"trace[1682919252] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"103.801223ms","start":"2025-09-04T06:01:10.213086Z","end":"2025-09-04T06:01:10.316887Z","steps":["trace[1682919252] 'process raft request'  (duration: 103.02272ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:01:10.316908Z","caller":"traceutil/trace.go:172","msg":"trace[1066468381] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"103.772369ms","start":"2025-09-04T06:01:10.213129Z","end":"2025-09-04T06:01:10.316902Z","steps":["trace[1066468381] 'process raft request'  (duration: 103.009626ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:01:10.316930Z","caller":"traceutil/trace.go:172","msg":"trace[341157719] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"104.049793ms","start":"2025-09-04T06:01:10.212875Z","end":"2025-09-04T06:01:10.316924Z","steps":["trace[341157719] 'process raft request'  (duration: 103.196354ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:01:10.316936Z","caller":"traceutil/trace.go:172","msg":"trace[1766951050] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"101.868679ms","start":"2025-09-04T06:01:10.215063Z","end":"2025-09-04T06:01:10.316931Z","steps":["trace[1766951050] 'process raft request'  (duration: 101.101975ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:01:10.317113Z","caller":"traceutil/trace.go:172","msg":"trace[1294257128] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"101.458084ms","start":"2025-09-04T06:01:10.215647Z","end":"2025-09-04T06:01:10.317105Z","steps":["trace[1294257128] 'process raft request'  (duration: 100.566137ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:01:10.405386Z","caller":"traceutil/trace.go:172","msg":"trace[1343114806] range","detail":"{range_begin:/registry/minions/addons-306757; range_end:; response_count:1; response_revision:466; }","duration":"190.077232ms","start":"2025-09-04T06:01:10.215281Z","end":"2025-09-04T06:01:10.405358Z","steps":["trace[1343114806] 'agreement among raft nodes before linearized reading'  (duration: 101.254094ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T06:01:10.616284Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.182685ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/registry-creds-764b6fb674\" limit:1 ","response":"range_response_count:1 size:4654"}
	{"level":"info","ts":"2025-09-04T06:01:10.616361Z","caller":"traceutil/trace.go:172","msg":"trace[1825959956] range","detail":"{range_begin:/registry/replicasets/kube-system/registry-creds-764b6fb674; range_end:; response_count:1; response_revision:477; }","duration":"101.270736ms","start":"2025-09-04T06:01:10.515076Z","end":"2025-09-04T06:01:10.616347Z","steps":["trace[1825959956] 'agreement among raft nodes before linearized reading'  (duration: 101.107124ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:01:10.616838Z","caller":"traceutil/trace.go:172","msg":"trace[1374386832] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"107.163802ms","start":"2025-09-04T06:01:10.509662Z","end":"2025-09-04T06:01:10.616826Z","steps":["trace[1374386832] 'process raft request'  (duration: 107.129546ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:01:10.617079Z","caller":"traceutil/trace.go:172","msg":"trace[1926669606] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"107.467754ms","start":"2025-09-04T06:01:10.509597Z","end":"2025-09-04T06:01:10.617065Z","steps":["trace[1926669606] 'process raft request'  (duration: 107.088743ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T06:01:13.905192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:01:13.912366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:01:35.249459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:01:35.255989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:01:35.273871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:01:35.281055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:02:32.637339Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.847989ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T06:02:32.637444Z","caller":"traceutil/trace.go:172","msg":"trace[1452055534] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1181; }","duration":"111.970611ms","start":"2025-09-04T06:02:32.525457Z","end":"2025-09-04T06:02:32.637428Z","steps":["trace[1452055534] 'range keys from in-memory index tree'  (duration: 111.739182ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T06:02:32.637346Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.866328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2025-09-04T06:02:32.637636Z","caller":"traceutil/trace.go:172","msg":"trace[1107574242] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1181; }","duration":"112.16354ms","start":"2025-09-04T06:02:32.525457Z","end":"2025-09-04T06:02:32.637621Z","steps":["trace[1107574242] 'range keys from in-memory index tree'  (duration: 111.678573ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:03:24.133052Z","caller":"traceutil/trace.go:172","msg":"trace[543892696] transaction","detail":"{read_only:false; response_revision:1453; number_of_response:1; }","duration":"102.942083ms","start":"2025-09-04T06:03:24.030087Z","end":"2025-09-04T06:03:24.133029Z","steps":["trace[543892696] 'process raft request'  (duration: 102.826651ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:03:34.423451Z","caller":"traceutil/trace.go:172","msg":"trace[1828971221] transaction","detail":"{read_only:false; response_revision:1570; number_of_response:1; }","duration":"116.630074ms","start":"2025-09-04T06:03:34.306798Z","end":"2025-09-04T06:03:34.423428Z","steps":["trace[1828971221] 'process raft request'  (duration: 51.890702ms)","trace[1828971221] 'compare'  (duration: 64.630169ms)"],"step_count":2}
	
	
	==> kernel <==
	 06:06:04 up  3:48,  0 users,  load average: 0.68, 2.14, 2.90
	Linux addons-306757 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [26f09e878633badf36d137ced908bcb99bd4bf9bbe6c37e49577ca9a550321ed] <==
	I0904 06:04:00.013851       1 main.go:301] handling current node
	I0904 06:04:10.016613       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:04:10.016731       1 main.go:301] handling current node
	I0904 06:04:20.015952       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:04:20.016011       1 main.go:301] handling current node
	I0904 06:04:30.012757       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:04:30.012796       1 main.go:301] handling current node
	I0904 06:04:40.013173       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:04:40.013210       1 main.go:301] handling current node
	I0904 06:04:50.015873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:04:50.015905       1 main.go:301] handling current node
	I0904 06:05:00.015900       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:05:00.015940       1 main.go:301] handling current node
	I0904 06:05:10.011922       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:05:10.011959       1 main.go:301] handling current node
	I0904 06:05:20.015129       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:05:20.015165       1 main.go:301] handling current node
	I0904 06:05:30.015967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:05:30.016004       1 main.go:301] handling current node
	I0904 06:05:40.010359       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:05:40.010399       1 main.go:301] handling current node
	I0904 06:05:50.016314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:05:50.016352       1 main.go:301] handling current node
	I0904 06:06:00.017804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:06:00.017840       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4b118c362dd8aacd65061b561d4c9854d5759661e039d72fa1faa4dc857d1690] <==
	E0904 06:03:06.437447       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56462: use of closed network connection
	E0904 06:03:06.616848       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56482: use of closed network connection
	I0904 06:03:12.886712       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:03:15.604234       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.152.218"}
	I0904 06:03:38.227094       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:03:41.466882       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0904 06:03:41.710543       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.47.189"}
	I0904 06:03:43.304277       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0904 06:03:45.812772       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0904 06:04:08.612408       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 06:04:08.612462       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 06:04:08.629404       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 06:04:08.629561       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 06:04:08.641169       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 06:04:08.641203       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 06:04:08.658010       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 06:04:08.658050       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0904 06:04:09.630168       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0904 06:04:09.700347       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0904 06:04:09.708180       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0904 06:04:17.336190       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0904 06:04:33.470518       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:04:56.208713       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:05:58.424947       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:06:03.396831       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.86.47"}
	
	
	==> kube-controller-manager [0363a691e780de3ae3df0656b56679b8cccd14505f0a47f077a4f9539d69440e] <==
	E0904 06:04:20.221553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:04:28.958430       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:04:28.959451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:04:29.098349       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:04:29.099355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:04:31.088354       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:04:31.089491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0904 06:04:35.433311       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0904 06:04:35.433359       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 06:04:35.433398       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0904 06:04:35.433435       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0904 06:04:45.663173       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:04:45.664266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:04:49.624996       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:04:49.626029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:04:51.132606       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:04:51.133667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:05:18.731461       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:05:18.732651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:05:19.311313       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:05:19.312374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:05:21.900587       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:05:21.901619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:05:57.764059       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:05:57.765063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [c5ed15c219379cd83daee401c5b45d8191550722806c02b85cf43cd65402c75c] <==
	I0904 06:01:09.614800       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:01:10.515478       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:01:10.617033       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:01:10.617077       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 06:01:10.617163       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:01:11.214447       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:01:11.214602       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:01:11.304926       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:01:11.306196       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:01:11.306372       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:01:11.308067       1 config.go:200] "Starting service config controller"
	I0904 06:01:11.308077       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:01:11.309065       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:01:11.308097       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:01:11.309119       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:01:11.308614       1 config.go:309] "Starting node config controller"
	I0904 06:01:11.309133       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:01:11.309139       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:01:11.309036       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:01:11.411296       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:01:11.411334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 06:01:11.411465       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ecb7ed3df463839a88ef7b5dc459407d70724b2c76ccf353d08d09ad860c515e] <==
	E0904 06:00:58.313610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 06:00:58.314037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 06:00:58.315332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 06:00:58.315338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 06:00:58.315481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 06:00:58.315572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 06:00:58.315656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 06:00:58.315788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 06:00:58.315906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 06:00:58.316094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 06:00:58.316107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 06:00:58.316184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 06:00:58.316347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 06:00:58.316857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 06:00:59.212476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 06:00:59.226662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 06:00:59.248889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 06:00:59.250764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 06:00:59.298750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 06:00:59.304912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 06:00:59.331390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 06:00:59.377631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 06:00:59.384719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 06:00:59.418188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I0904 06:01:02.011186       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 06:05:30 addons-306757 kubelet[1653]: E0904 06:05:30.865125    1653 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965930864822160  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 04 06:05:30 addons-306757 kubelet[1653]: E0904 06:05:30.865160    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965930864822160  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 04 06:05:40 addons-306757 kubelet[1653]: E0904 06:05:40.868298    1653 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965940868013303  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 04 06:05:40 addons-306757 kubelet[1653]: E0904 06:05:40.868340    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965940868013303  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 04 06:05:50 addons-306757 kubelet[1653]: E0904 06:05:50.803367    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9b65936ccfdf16e64bf3703516dde63140015d8e03293ad6f226a428f2b2cb0d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9b65936ccfdf16e64bf3703516dde63140015d8e03293ad6f226a428f2b2cb0d/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:05:50 addons-306757 kubelet[1653]: E0904 06:05:50.870407    1653 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965950870137644  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 04 06:05:50 addons-306757 kubelet[1653]: E0904 06:05:50.870441    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965950870137644  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.762951    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e5f8208ab399db5b5e207ded2823d558730c8116b7a1e1fb242334b014c412ff/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e5f8208ab399db5b5e207ded2823d558730c8116b7a1e1fb242334b014c412ff/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.767176    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2c15cf47ad8b7b3090363ce2a543d4a716f29ee71c8ba65352a1c361209e54e8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2c15cf47ad8b7b3090363ce2a543d4a716f29ee71c8ba65352a1c361209e54e8/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.770462    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fb0ef6e58deaebed9caacfbae85332dc67a0af8841aa06eec0e12ded5921f25d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fb0ef6e58deaebed9caacfbae85332dc67a0af8841aa06eec0e12ded5921f25d/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.774753    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e5f8208ab399db5b5e207ded2823d558730c8116b7a1e1fb242334b014c412ff/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e5f8208ab399db5b5e207ded2823d558730c8116b7a1e1fb242334b014c412ff/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.805199    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/280ab400b1bb8511bd0ff9d083048948de9ca886634dd73ce6c1ba701372c6cd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/280ab400b1bb8511bd0ff9d083048948de9ca886634dd73ce6c1ba701372c6cd/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.805318    1653 container_manager_linux.go:562] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801, memory: /docker/1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801/system.slice/kubelet.service"
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.810941    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9b65936ccfdf16e64bf3703516dde63140015d8e03293ad6f226a428f2b2cb0d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9b65936ccfdf16e64bf3703516dde63140015d8e03293ad6f226a428f2b2cb0d/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.812897    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/280ab400b1bb8511bd0ff9d083048948de9ca886634dd73ce6c1ba701372c6cd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/280ab400b1bb8511bd0ff9d083048948de9ca886634dd73ce6c1ba701372c6cd/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.818172    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b0de5106e3725912e468c1ff24ac53c765c18adffdfab360304684122413b328/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b0de5106e3725912e468c1ff24ac53c765c18adffdfab360304684122413b328/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.818202    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fb0ef6e58deaebed9caacfbae85332dc67a0af8841aa06eec0e12ded5921f25d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fb0ef6e58deaebed9caacfbae85332dc67a0af8841aa06eec0e12ded5921f25d/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.827488    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b0de5106e3725912e468c1ff24ac53c765c18adffdfab360304684122413b328/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b0de5106e3725912e468c1ff24ac53c765c18adffdfab360304684122413b328/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.827531    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ba3d6508dcc657eca992e4855a75435dd4581ce2024bdc18a4736b84dd5e1959/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ba3d6508dcc657eca992e4855a75435dd4581ce2024bdc18a4736b84dd5e1959/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.829730    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2c15cf47ad8b7b3090363ce2a543d4a716f29ee71c8ba65352a1c361209e54e8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2c15cf47ad8b7b3090363ce2a543d4a716f29ee71c8ba65352a1c361209e54e8/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.829746    1653 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ba3d6508dcc657eca992e4855a75435dd4581ce2024bdc18a4736b84dd5e1959/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ba3d6508dcc657eca992e4855a75435dd4581ce2024bdc18a4736b84dd5e1959/diff: no such file or directory, extraDiskErr: <nil>
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.872508    1653 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965960872286279  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 04 06:06:00 addons-306757 kubelet[1653]: E0904 06:06:00.872538    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965960872286279  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 04 06:06:03 addons-306757 kubelet[1653]: I0904 06:06:03.305578    1653 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v6s5\" (UniqueName: \"kubernetes.io/projected/bdb8ec94-8ccb-48fb-be65-849fab4978f2-kube-api-access-9v6s5\") pod \"hello-world-app-5d498dc89-wx9jg\" (UID: \"bdb8ec94-8ccb-48fb-be65-849fab4978f2\") " pod="default/hello-world-app-5d498dc89-wx9jg"
	Sep 04 06:06:03 addons-306757 kubelet[1653]: W0904 06:06:03.614501    1653 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1897291195d7a2a68643143a3accf64d5a2250a3dd3808236a1c1e6d41d54801/crio-58a09dca18a653b9386121048a8b079d5db2a933c2b85ae1326eb82878522ec0 WatchSource:0}: Error finding container 58a09dca18a653b9386121048a8b079d5db2a933c2b85ae1326eb82878522ec0: Status 404 returned error can't find the container with id 58a09dca18a653b9386121048a8b079d5db2a933c2b85ae1326eb82878522ec0
	
	
	==> storage-provisioner [c06974bc28ee768cffd1cb943fc989c5d1cbb8e1c1984289cee980459cf2a911] <==
	W0904 06:05:40.755192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:42.758421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:42.762210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:44.765865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:44.770269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:46.773473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:46.777538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:48.780718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:48.784686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:50.788023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:50.791839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:52.795320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:52.799470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:54.802508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:54.806614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:56.809799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:56.814571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:58.817197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:05:58.821088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:06:00.824978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:06:00.829071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:06:02.832132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:06:02.836855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:06:04.840523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:06:04.844065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-306757 -n addons-306757
helpers_test.go:269: (dbg) Run:  kubectl --context addons-306757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-h96mz ingress-nginx-admission-patch-tpjb9
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-306757 describe pod ingress-nginx-admission-create-h96mz ingress-nginx-admission-patch-tpjb9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-306757 describe pod ingress-nginx-admission-create-h96mz ingress-nginx-admission-patch-tpjb9: exit status 1 (56.096295ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h96mz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tpjb9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-306757 describe pod ingress-nginx-admission-create-h96mz ingress-nginx-admission-patch-tpjb9: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-306757 addons disable ingress-dns --alsologtostderr -v=1: (1.464340644s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-306757 addons disable ingress --alsologtostderr -v=1: (7.63943292s)
--- FAIL: TestAddons/parallel/Ingress (153.59s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-856205 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-856205 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-856205 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-856205 --alsologtostderr -v=1] stderr:
I0904 06:09:59.177779 1562874 out.go:360] Setting OutFile to fd 1 ...
I0904 06:09:59.178038 1562874 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:09:59.178047 1562874 out.go:374] Setting ErrFile to fd 2...
I0904 06:09:59.178051 1562874 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:09:59.178248 1562874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
I0904 06:09:59.178475 1562874 mustload.go:65] Loading cluster: functional-856205
I0904 06:09:59.178853 1562874 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:09:59.179212 1562874 cli_runner.go:164] Run: docker container inspect functional-856205 --format={{.State.Status}}
I0904 06:09:59.196152 1562874 host.go:66] Checking if "functional-856205" exists ...
I0904 06:09:59.196397 1562874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0904 06:09:59.258424 1562874 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-04 06:09:59.247054018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
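For reference, a minimal Go sketch of the daemon check logged above: it shells out to "docker system info --format {{json .}}" and decodes a handful of the fields visible in the output (Driver, CgroupDriver, ServerVersion, NCPU, MemTotal). The struct is an illustration that covers only those fields, not minikube's actual info type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds only the fields this sketch prints; the real JSON has many more.
type dockerInfo struct {
	Driver        string `json:"Driver"`
	CgroupDriver  string `json:"CgroupDriver"`
	ServerVersion string `json:"ServerVersion"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
}

func main() {
	// Same command as the cli_runner line above.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("driver=%s cgroup=%s server=%s cpus=%d mem=%d\n",
		info.Driver, info.CgroupDriver, info.ServerVersion, info.NCPU, info.MemTotal)
}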
I0904 06:09:59.258543 1562874 api_server.go:166] Checking apiserver status ...
I0904 06:09:59.258610 1562874 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0904 06:09:59.258657 1562874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-856205
I0904 06:09:59.276269 1562874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33969 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/functional-856205/id_rsa Username:docker}
I0904 06:09:59.411880 1562874 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5543/cgroup
I0904 06:09:59.421525 1562874 api_server.go:182] apiserver freezer: "12:freezer:/docker/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/crio/crio-771936d22e57c182d02189a3ee4f00cb34c26beeea34a37be8a97c913b60d937"
I0904 06:09:59.421592 1562874 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/crio/crio-771936d22e57c182d02189a3ee4f00cb34c26beeea34a37be8a97c913b60d937/freezer.state
I0904 06:09:59.432193 1562874 api_server.go:204] freezer state: "THAWED"
I0904 06:09:59.432229 1562874 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0904 06:09:59.438027 1562874 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
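The apiserver check above finds the kube-apiserver PID with pgrep, confirms its freezer cgroup is THAWED, then probes https://192.168.49.2:8441/healthz and expects a 200 with body "ok". A minimal Go sketch of that final probe follows; the address is taken from this run, and InsecureSkipVerify stands in for the profile's CA and client certificates that the real check uses.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification: the real check trusts the profile's ca.crt
			// instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8441/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}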
W0904 06:09:59.438078 1562874 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0904 06:09:59.438281 1562874 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:09:59.438307 1562874 addons.go:69] Setting dashboard=true in profile "functional-856205"
I0904 06:09:59.438318 1562874 addons.go:238] Setting addon dashboard=true in "functional-856205"
I0904 06:09:59.438354 1562874 host.go:66] Checking if "functional-856205" exists ...
I0904 06:09:59.438816 1562874 cli_runner.go:164] Run: docker container inspect functional-856205 --format={{.State.Status}}
I0904 06:09:59.460032 1562874 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0904 06:09:59.461388 1562874 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0904 06:09:59.462356 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0904 06:09:59.462378 1562874 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0904 06:09:59.462455 1562874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-856205
I0904 06:09:59.479080 1562874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33969 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/functional-856205/id_rsa Username:docker}
I0904 06:09:59.616740 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0904 06:09:59.616774 1562874 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0904 06:09:59.636980 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0904 06:09:59.637007 1562874 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0904 06:09:59.706912 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0904 06:09:59.706936 1562874 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0904 06:09:59.726019 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0904 06:09:59.726043 1562874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0904 06:09:59.743463 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0904 06:09:59.743493 1562874 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0904 06:09:59.807382 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0904 06:09:59.807414 1562874 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0904 06:09:59.827298 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0904 06:09:59.827333 1562874 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0904 06:09:59.846291 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0904 06:09:59.846326 1562874 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0904 06:09:59.916118 1562874 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0904 06:09:59.916153 1562874 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0904 06:09:59.935100 1562874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0904 06:10:00.821397 1562874 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-856205 addons enable metrics-server

I0904 06:10:00.822840 1562874 addons.go:201] Writing out "functional-856205" config to set dashboard=true...
W0904 06:10:00.823056 1562874 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0904 06:10:00.823656 1562874 kapi.go:59] client config for functional-856205: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0904 06:10:00.824134 1562874 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0904 06:10:00.824153 1562874 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0904 06:10:00.824160 1562874 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0904 06:10:00.824167 1562874 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0904 06:10:00.824172 1562874 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0904 06:10:00.831569 1562874 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  4fe0ba69-c319-452d-9af1-fedf1501770e 852 0 2025-09-04 06:10:00 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-04 06:10:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.97.11,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.97.11],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0904 06:10:00.831702 1562874 out.go:285] * Launching proxy ...
* Launching proxy ...
I0904 06:10:00.831756 1562874 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-856205 proxy --port 36195]
I0904 06:10:00.832085 1562874 dashboard.go:157] Waiting for kubectl to output host:port ...
I0904 06:10:00.873799 1562874 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
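The proxy launch above starts "kubectl --context functional-856205 proxy --port 36195" and waits for kubectl to report its listen address on stdout before probing the dashboard. A minimal Go sketch of that wait, assuming kubectl is on PATH; the real code also handles timeouts and kills the process when the test stops.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-856205", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println("pipe error:", err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println("start error:", err)
		return
	}
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		line := scanner.Text()
		// kubectl proxy prints e.g. "Starting to serve on 127.0.0.1:36195".
		if strings.HasPrefix(line, "Starting to serve on") {
			fmt.Println("proxy ready:", line)
			break
		}
	}
	// A real caller would probe the dashboard next and eventually stop the proxy:
	// _ = cmd.Process.Kill()
}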
W0904 06:10:00.873866 1562874 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0904 06:10:00.881902 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dcfa169f-55a3-4a6c-b92f-505010a21bdd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc0008e34c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003a000 TLS:<nil>}
I0904 06:10:00.881980 1562874 retry.go:31] will retry after 66.882µs: Temporary Error: unexpected response code: 503
I0904 06:10:00.885079 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e2e2fd3a-1138-4a48-9b45-988bc3790fcd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc00096b1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aca00 TLS:<nil>}
I0904 06:10:00.885137 1562874 retry.go:31] will retry after 161.027µs: Temporary Error: unexpected response code: 503
I0904 06:10:00.888041 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce9dab57-0646-4819-893b-57a7082ed552] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc000882900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I0904 06:10:00.888086 1562874 retry.go:31] will retry after 117.413µs: Temporary Error: unexpected response code: 503
I0904 06:10:00.890923 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd6096cf-ebc7-4d68-b486-fa9946484f85] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc0008829c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003a140 TLS:<nil>}
I0904 06:10:00.891002 1562874 retry.go:31] will retry after 408.077µs: Temporary Error: unexpected response code: 503
I0904 06:10:00.893837 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c37c1c5b-411e-416f-8c1a-ae5df3da6a2e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc0008e3600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003a280 TLS:<nil>}
I0904 06:10:00.893882 1562874 retry.go:31] will retry after 655.187µs: Temporary Error: unexpected response code: 503
I0904 06:10:00.896646 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b572114a-f5fc-4b15-9e26-7f562ce87010] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc00096b440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004acb40 TLS:<nil>}
I0904 06:10:00.896688 1562874 retry.go:31] will retry after 1.098456ms: Temporary Error: unexpected response code: 503
I0904 06:10:00.900520 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f3a282f-0a51-4832-9427-4cf8d53ab736] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc0008e3700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I0904 06:10:00.900556 1562874 retry.go:31] will retry after 1.239669ms: Temporary Error: unexpected response code: 503
I0904 06:10:00.904424 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eaa52ebe-4c2e-4446-ad7c-49b1754a1704] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc000882b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004acc80 TLS:<nil>}
I0904 06:10:00.904466 1562874 retry.go:31] will retry after 1.480952ms: Temporary Error: unexpected response code: 503
I0904 06:10:00.910862 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[01624e4a-9e30-45f7-8e83-4cd10c6fc2a1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc00096b540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003a3c0 TLS:<nil>}
I0904 06:10:00.910898 1562874 retry.go:31] will retry after 3.643989ms: Temporary Error: unexpected response code: 503
I0904 06:10:00.917013 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ef695bc5-36b2-48d5-8682-92aed4f3a7da] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc0008e3840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I0904 06:10:00.917061 1562874 retry.go:31] will retry after 3.62098ms: Temporary Error: unexpected response code: 503
I0904 06:10:00.923236 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb4cbe40-7cf2-433a-a26b-3f72ad40018f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc000882c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004acdc0 TLS:<nil>}
I0904 06:10:00.923280 1562874 retry.go:31] will retry after 8.010927ms: Temporary Error: unexpected response code: 503
I0904 06:10:00.933209 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f5ed41f6-4a2c-4013-9408-123baccb913f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc000882e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003a500 TLS:<nil>}
I0904 06:10:00.933263 1562874 retry.go:31] will retry after 7.462008ms: Temporary Error: unexpected response code: 503
I0904 06:10:00.943303 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[51ec6376-f4c3-469d-8238-5d7eca3be2d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc0009d4000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003a640 TLS:<nil>}
I0904 06:10:00.943338 1562874 retry.go:31] will retry after 13.997953ms: Temporary Error: unexpected response code: 503
I0904 06:10:00.959537 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a1fc5242-106b-479e-b434-a47655eacbb4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc00096b640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003a780 TLS:<nil>}
I0904 06:10:00.959599 1562874 retry.go:31] will retry after 20.434305ms: Temporary Error: unexpected response code: 503
I0904 06:10:00.982874 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c6f2a4e6-4166-4b55-8bce-95210c8fbb9a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:00 GMT]] Body:0xc0008e3940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I0904 06:10:00.982928 1562874 retry.go:31] will retry after 38.225868ms: Temporary Error: unexpected response code: 503
I0904 06:10:01.024914 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6856132e-deb0-4d96-b956-f4a1ca6b00ab] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:01 GMT]] Body:0xc0009d41c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ad040 TLS:<nil>}
I0904 06:10:01.025004 1562874 retry.go:31] will retry after 59.771678ms: Temporary Error: unexpected response code: 503
I0904 06:10:01.088088 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0796c8c8-a71b-4cdc-8a44-3f8fec746467] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:01 GMT]] Body:0xc00096b740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003a8c0 TLS:<nil>}
I0904 06:10:01.088172 1562874 retry.go:31] will retry after 33.100359ms: Temporary Error: unexpected response code: 503
I0904 06:10:01.124960 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f3e35b8-37a9-4d1f-99d6-33663be37c09] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:01 GMT]] Body:0xc0009d4340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I0904 06:10:01.125022 1562874 retry.go:31] will retry after 64.331557ms: Temporary Error: unexpected response code: 503
I0904 06:10:01.193206 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[412a53ca-399d-45db-a267-464811b077c1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:01 GMT]] Body:0xc0009d4440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003aa00 TLS:<nil>}
I0904 06:10:01.193268 1562874 retry.go:31] will retry after 154.526334ms: Temporary Error: unexpected response code: 503
I0904 06:10:01.351221 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[974f5536-6566-4f8a-beb3-d258413a093e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:01 GMT]] Body:0xc0008e3a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003ab40 TLS:<nil>}
I0904 06:10:01.351281 1562874 retry.go:31] will retry after 182.297273ms: Temporary Error: unexpected response code: 503
I0904 06:10:01.537591 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d2a03e8a-a007-4f7f-8b72-d63bbd3832f9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:01 GMT]] Body:0xc00096b880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ad180 TLS:<nil>}
I0904 06:10:01.537667 1562874 retry.go:31] will retry after 482.648375ms: Temporary Error: unexpected response code: 503
I0904 06:10:02.023889 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3988110f-5a5b-4ff1-9b40-2e9074373e74] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:02 GMT]] Body:0xc0009d4640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002077c0 TLS:<nil>}
I0904 06:10:02.023967 1562874 retry.go:31] will retry after 431.687322ms: Temporary Error: unexpected response code: 503
I0904 06:10:02.459753 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[03cdc8e4-564f-44e1-bfc3-6919f2142642] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:02 GMT]] Body:0xc00096ba00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003ac80 TLS:<nil>}
I0904 06:10:02.459843 1562874 retry.go:31] will retry after 1.081895905s: Temporary Error: unexpected response code: 503
I0904 06:10:03.544876 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[549cd939-ea50-41e6-89d1-12cf9fc92845] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:03 GMT]] Body:0xc0008e3b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I0904 06:10:03.544938 1562874 retry.go:31] will retry after 1.249295799s: Temporary Error: unexpected response code: 503
I0904 06:10:04.797283 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5f8172f6-eb25-4d48-8efa-90020f70d49d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:04 GMT]] Body:0xc0008e3c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ad2c0 TLS:<nil>}
I0904 06:10:04.797347 1562874 retry.go:31] will retry after 936.378768ms: Temporary Error: unexpected response code: 503
I0904 06:10:05.737568 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2ed105cd-414c-4f5e-b287-8f3aa8f140b8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:05 GMT]] Body:0xc0009d4980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ad400 TLS:<nil>}
I0904 06:10:05.737635 1562874 retry.go:31] will retry after 2.895631803s: Temporary Error: unexpected response code: 503
I0904 06:10:08.637239 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06e4fc91-de12-4ebc-859b-5e2f8b75a595] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:08 GMT]] Body:0xc0008e3d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003adc0 TLS:<nil>}
I0904 06:10:08.637327 1562874 retry.go:31] will retry after 3.035233689s: Temporary Error: unexpected response code: 503
I0904 06:10:11.676503 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ccc99903-0dee-4a07-a14b-8abf942aafdb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:11 GMT]] Body:0xc0008e3dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207a40 TLS:<nil>}
I0904 06:10:11.676582 1562874 retry.go:31] will retry after 7.654002666s: Temporary Error: unexpected response code: 503
I0904 06:10:19.334227 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[97294a30-467a-4690-829c-019c5329789c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:19 GMT]] Body:0xc0009d4b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ad540 TLS:<nil>}
I0904 06:10:19.334337 1562874 retry.go:31] will retry after 5.192266028s: Temporary Error: unexpected response code: 503
I0904 06:10:24.529696 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b259bef-8b95-4d7a-8822-1e5029c8d737] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:24 GMT]] Body:0xc00096bd00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003af00 TLS:<nil>}
I0904 06:10:24.529788 1562874 retry.go:31] will retry after 11.072318757s: Temporary Error: unexpected response code: 503
I0904 06:10:35.606523 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a325f15e-a55d-453a-bd9a-4bcb99e7849b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:35 GMT]] Body:0xc00096bdc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207b80 TLS:<nil>}
I0904 06:10:35.606602 1562874 retry.go:31] will retry after 15.00661079s: Temporary Error: unexpected response code: 503
I0904 06:10:50.617381 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5cdd50f-8238-4e9d-ae6f-e291fd6b1006] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:10:50 GMT]] Body:0xc0008e3ec0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00003b040 TLS:<nil>}
I0904 06:10:50.617451 1562874 retry.go:31] will retry after 20.468029458s: Temporary Error: unexpected response code: 503
I0904 06:11:11.089331 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dae87a69-e6ae-41e7-86e7-799b77df0405] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:11:11 GMT]] Body:0xc00188a000 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004ad680 TLS:<nil>}
I0904 06:11:11.089407 1562874 retry.go:31] will retry after 36.046292115s: Temporary Error: unexpected response code: 503
I0904 06:11:47.141313 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5e2fc14e-9870-4d0f-8b63-f4eab4a64796] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:11:47 GMT]] Body:0xc00188a080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I0904 06:11:47.141390 1562874 retry.go:31] will retry after 1m26.834119642s: Temporary Error: unexpected response code: 503
I0904 06:13:13.979432 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b504a5c3-f9af-49b5-86e6-42f291733253] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:13:13 GMT]] Body:0xc001866080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00052e000 TLS:<nil>}
I0904 06:13:13.979524 1562874 retry.go:31] will retry after 1m24.732823636s: Temporary Error: unexpected response code: 503
I0904 06:14:38.719065 1562874 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[82dad3de-cfc4-4779-bb3a-b5df1bdc847a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 04 Sep 2025 06:14:38 GMT]] Body:0xc0018660c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00052e140 TLS:<nil>}
I0904 06:14:38.719167 1562874 retry.go:31] will retry after 1m24.616321327s: Temporary Error: unexpected response code: 503
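The block above is a retry loop: every probe of the dashboard service through the local proxy returns 503 Service Unavailable, and the wait between attempts grows from microseconds to over a minute until the test's own timeout fires. A minimal Go sketch of that pattern, assuming a plain doubling backoff and a five-minute deadline (the real code uses its own jittered schedule):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(5 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("dashboard is healthy")
				return
			}
			fmt.Printf("unexpected response code: %d, retrying in %v\n", status, delay)
		} else {
			fmt.Printf("request failed: %v, retrying in %v\n", err, delay)
		}
		time.Sleep(delay)
		delay *= 2 // back off, as the growing intervals in the log suggest
	}
	fmt.Println("dashboard never became healthy before the deadline")
}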
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-856205
helpers_test.go:243: (dbg) docker inspect functional-856205:

-- stdout --
	[
	    {
	        "Id": "d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75",
	        "Created": "2025-09-04T06:07:11.035642359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1546419,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:07:11.067723153Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/hostname",
	        "HostsPath": "/var/lib/docker/containers/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/hosts",
	        "LogPath": "/var/lib/docker/containers/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75-json.log",
	        "Name": "/functional-856205",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-856205:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-856205",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75",
	                "LowerDir": "/var/lib/docker/overlay2/d33139dae79187bfd277f343bc0f354a677e09aebbbb6dfcaf24e951d2156502-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d33139dae79187bfd277f343bc0f354a677e09aebbbb6dfcaf24e951d2156502/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d33139dae79187bfd277f343bc0f354a677e09aebbbb6dfcaf24e951d2156502/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d33139dae79187bfd277f343bc0f354a677e09aebbbb6dfcaf24e951d2156502/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-856205",
	                "Source": "/var/lib/docker/volumes/functional-856205/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-856205",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-856205",
	                "name.minikube.sigs.k8s.io": "functional-856205",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c84ddeed4ee403c03a5ccfb87a054e52dcc4b1db8fd2bf7b8979144a3f5519e",
	            "SandboxKey": "/var/run/docker/netns/5c84ddeed4ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33970"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33973"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33971"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33972"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-856205": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:af:b8:75:73:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8db23358731e7cff5328ad69c486affb0ff9c40289b5dbaf6ead93c1165a1548",
	                    "EndpointID": "9a0d1cc729f0f792a8fa4a51b8bda2614262a3339ee4c316798bc1db23df5daf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-856205",
	                        "d7c15d351271"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
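The inspect output above is where the SSH port comes from: the container publishes 22/tcp on 127.0.0.1:33969, which is the port the sshutil lines earlier dial. A minimal Go sketch of that lookup, using the same inspect format string as the cli_runner lines; the container name is this run's profile.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same format string as the "docker container inspect -f ..." commands in the log.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-856205").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33969 in this run
}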
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-856205 -n functional-856205
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 logs -n 25: (1.369684254s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-856205 ssh echo hello                                                                          │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ ssh            │ functional-856205 ssh cat /etc/hostname                                                                   │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ tunnel         │ functional-856205 tunnel --alsologtostderr                                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ tunnel         │ functional-856205 tunnel --alsologtostderr                                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ ssh            │ functional-856205 ssh findmnt -T /mount1                                                                  │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ ssh            │ functional-856205 ssh findmnt -T /mount2                                                                  │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ tunnel         │ functional-856205 tunnel --alsologtostderr                                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ ssh            │ functional-856205 ssh findmnt -T /mount3                                                                  │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ mount          │ -p functional-856205 --kill=true                                                                          │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ addons         │ functional-856205 addons list                                                                             │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ addons         │ functional-856205 addons list -o json                                                                     │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ start          │ -p functional-856205 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ start          │ -p functional-856205 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ start          │ -p functional-856205 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-856205 --alsologtostderr -v=1                                            │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ update-context │ functional-856205 update-context --alsologtostderr -v=2                                                   │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ update-context │ functional-856205 update-context --alsologtostderr -v=2                                                   │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ update-context │ functional-856205 update-context --alsologtostderr -v=2                                                   │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls --format short --alsologtostderr                                               │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls --format yaml --alsologtostderr                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ ssh            │ functional-856205 ssh pgrep buildkitd                                                                     │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │                     │
	│ image          │ functional-856205 image build -t localhost/my-image:functional-856205 testdata/build --alsologtostderr    │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls                                                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls --format json --alsologtostderr                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls --format table --alsologtostderr                                               │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:09:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:09:59.010801 1562791 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:09:59.011057 1562791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:09:59.011068 1562791 out.go:374] Setting ErrFile to fd 2...
	I0904 06:09:59.011074 1562791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:09:59.011411 1562791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:09:59.011988 1562791 out.go:368] Setting JSON to false
	I0904 06:09:59.013235 1562791 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13949,"bootTime":1756952250,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:09:59.013348 1562791 start.go:140] virtualization: kvm guest
	I0904 06:09:59.014926 1562791 out.go:179] * [functional-856205] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:09:59.016529 1562791 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:09:59.016531 1562791 notify.go:220] Checking for updates...
	I0904 06:09:59.019540 1562791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:09:59.021121 1562791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:09:59.022435 1562791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:09:59.023708 1562791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:09:59.024941 1562791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:09:59.026499 1562791 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:09:59.027039 1562791 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:09:59.051784 1562791 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:09:59.051905 1562791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:09:59.107671 1562791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-04 06:09:59.09700648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:09:59.107787 1562791 docker.go:318] overlay module found
	I0904 06:09:59.110288 1562791 out.go:179] * Using the docker driver based on existing profile
	I0904 06:09:59.111655 1562791 start.go:304] selected driver: docker
	I0904 06:09:59.111672 1562791 start.go:918] validating driver "docker" against &{Name:functional-856205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-856205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:09:59.111789 1562791 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:09:59.114592 1562791 out.go:203] 
	W0904 06:09:59.115941 1562791 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 06:09:59.117320 1562791 out.go:203] 
	
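Note on the Last Start log above: it ends with RSRC_INSUFFICIENT_REQ_MEMORY because that particular start requested only 250MiB, which is below the 1800MB usable minimum reported in the message. As a minimal sketch (not taken from this run; the --memory value is an illustrative assumption), an invocation that clears that floor would look like:

	out/minikube-linux-amd64 start -p functional-856205 --driver=docker --memory=2048mb

Any --memory value of at least 1800MB passes the validation step that exited here.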
	
	==> CRI-O <==
	Sep 04 06:12:34 functional-856205 crio[4996]: time="2025-09-04 06:12:34.616740330Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9873fd77-3469-483c-9b8f-84a7d49a8bcc name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:12:46 functional-856205 crio[4996]: time="2025-09-04 06:12:46.616721720Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1596e50b-88ca-42b0-a9f1-0d6b2ea7728d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:12:46 functional-856205 crio[4996]: time="2025-09-04 06:12:46.617049605Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1596e50b-88ca-42b0-a9f1-0d6b2ea7728d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:12:47 functional-856205 crio[4996]: time="2025-09-04 06:12:47.616545911Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a244a917-1fdf-4ef2-a648-395cc9b024e4 name=/runtime.v1.ImageService/PullImage
	Sep 04 06:12:57 functional-856205 crio[4996]: time="2025-09-04 06:12:57.616587583Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=fef51243-7453-41e2-842a-56e3881250bf name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:12:57 functional-856205 crio[4996]: time="2025-09-04 06:12:57.616886177Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=fef51243-7453-41e2-842a-56e3881250bf name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:12:57 functional-856205 crio[4996]: time="2025-09-04 06:12:57.617490633Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8bca805f-d26a-42bc-912c-93d91603e5ef name=/runtime.v1.ImageService/PullImage
	Sep 04 06:12:57 functional-856205 crio[4996]: time="2025-09-04 06:12:57.622745291Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 04 06:13:39 functional-856205 crio[4996]: time="2025-09-04 06:13:39.618253365Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f2e4b7b5-2792-41e8-ac4e-a6cb3748a446 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:13:39 functional-856205 crio[4996]: time="2025-09-04 06:13:39.618523393Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f2e4b7b5-2792-41e8-ac4e-a6cb3748a446 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:13:53 functional-856205 crio[4996]: time="2025-09-04 06:13:53.616994626Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3e50fd1b-7eb0-4440-9d61-47b2eee7fbaa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:13:53 functional-856205 crio[4996]: time="2025-09-04 06:13:53.617334294Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3e50fd1b-7eb0-4440-9d61-47b2eee7fbaa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:13:54 functional-856205 crio[4996]: time="2025-09-04 06:13:54.616871771Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=db434132-88aa-4ed4-b4ed-01b88534f657 name=/runtime.v1.ImageService/PullImage
	Sep 04 06:14:08 functional-856205 crio[4996]: time="2025-09-04 06:14:08.616371918Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=70335bb1-405d-4604-ad43-0e0991fa76dc name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:08 functional-856205 crio[4996]: time="2025-09-04 06:14:08.616655168Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=70335bb1-405d-4604-ad43-0e0991fa76dc name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:19 functional-856205 crio[4996]: time="2025-09-04 06:14:19.616970763Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=99837aa5-3418-4fc8-9550-1264033c8bed name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:19 functional-856205 crio[4996]: time="2025-09-04 06:14:19.617314924Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=99837aa5-3418-4fc8-9550-1264033c8bed name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:31 functional-856205 crio[4996]: time="2025-09-04 06:14:31.616909492Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e99e0263-3168-45d9-a594-b368b1879645 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:31 functional-856205 crio[4996]: time="2025-09-04 06:14:31.617343138Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e99e0263-3168-45d9-a594-b368b1879645 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:43 functional-856205 crio[4996]: time="2025-09-04 06:14:43.616091045Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=46b713d8-3a60-4fab-9ffb-869ce692f384 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:43 functional-856205 crio[4996]: time="2025-09-04 06:14:43.616536433Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=46b713d8-3a60-4fab-9ffb-869ce692f384 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:56 functional-856205 crio[4996]: time="2025-09-04 06:14:56.615770484Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b4eee3d8-a83e-4150-baac-0e842e326688 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:56 functional-856205 crio[4996]: time="2025-09-04 06:14:56.616104932Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b4eee3d8-a83e-4150-baac-0e842e326688 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:14:56 functional-856205 crio[4996]: time="2025-09-04 06:14:56.616805912Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ed243aa9-5749-42c7-9b36-f1454cf62d0c name=/runtime.v1.ImageService/PullImage
	Sep 04 06:14:56 functional-856205 crio[4996]: time="2025-09-04 06:14:56.624792615Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	6489ddfca5443       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   4 minutes ago       Running             dashboard-metrics-scraper   0                   9552e723ee0e5       dashboard-metrics-scraper-77bf4d6c4c-sbxr7
	0f7539f2504e3       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  4 minutes ago       Running             nginx                       0                   a6f815279342e       nginx-svc
	d9bb5da3ea85f       docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57                  5 minutes ago       Running             myfrontend                  0                   d9ac997ceed30       sp-pod
	e406a5a33e982       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              5 minutes ago       Exited              mount-munger                0                   ead61ea1a7ecd       busybox-mount
	7a3de8ebe3c94       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  5 minutes ago       Running             mysql                       0                   c2e45d80d5c54       mysql-5bb876957f-vfcgg
	a4542298855b7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 5 minutes ago       Running             coredns                     2                   db9b14ee5303e       coredns-66bc5c9577-qt799
	45872c4231d69       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 5 minutes ago       Running             kube-proxy                  2                   de4817ee09ece       kube-proxy-9d6ws
	1f7565fa89e33       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 5 minutes ago       Running             kindnet-cni                 2                   26c3fed2a9856       kindnet-2788m
	ec50b019b4219       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 5 minutes ago       Running             storage-provisioner         3                   97f3c489ed1d9       storage-provisioner
	771936d22e57c       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 5 minutes ago       Running             kube-apiserver              0                   d13b4a409b186       kube-apiserver-functional-856205
	4d6b7b0accf43       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 5 minutes ago       Running             kube-controller-manager     2                   d4ae4d74e4d05       kube-controller-manager-functional-856205
	c0d9825afa367       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 5 minutes ago       Running             kube-scheduler              2                   76cab2c03a837       kube-scheduler-functional-856205
	5c27ea60586da       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 5 minutes ago       Running             etcd                        2                   a9e72a325305a       etcd-functional-856205
	3e2dde27e07f9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 6 minutes ago       Exited              storage-provisioner         2                   97f3c489ed1d9       storage-provisioner
	c7b52abe07155       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 6 minutes ago       Exited              kube-scheduler              1                   76cab2c03a837       kube-scheduler-functional-856205
	3d296839d66fd       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 6 minutes ago       Exited              kube-controller-manager     1                   d4ae4d74e4d05       kube-controller-manager-functional-856205
	4f71ad57d461e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 6 minutes ago       Exited              etcd                        1                   a9e72a325305a       etcd-functional-856205
	77e8aa8d9f2ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 6 minutes ago       Exited              coredns                     1                   db9b14ee5303e       coredns-66bc5c9577-qt799
	13225950240b3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 6 minutes ago       Exited              kindnet-cni                 1                   26c3fed2a9856       kindnet-2788m
	b0d09719dc3eb       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 6 minutes ago       Exited              kube-proxy                  1                   de4817ee09ece       kube-proxy-9d6ws
	
	
	==> coredns [77e8aa8d9f2ad6c6e7de599be18ce68e6ceffd0d1b64154b30871700d4ac685c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58497 - 40261 "HINFO IN 1560186451247838692.6315298664168261141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.098624079s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4542298855b750fd2d580ac0659afec08a2a08745fcaf5e0b9806e05251988e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60773 - 42611 "HINFO IN 4738126952861082454.2946721953905675424. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039442542s
	
	
	==> describe nodes <==
	Name:               functional-856205
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-856205
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=functional-856205
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_07_26_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:07:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-856205
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 06:14:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 06:10:45 +0000   Thu, 04 Sep 2025 06:07:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 06:10:45 +0000   Thu, 04 Sep 2025 06:07:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 06:10:45 +0000   Thu, 04 Sep 2025 06:07:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 06:10:45 +0000   Thu, 04 Sep 2025 06:08:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-856205
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 1db52e3d7b2744a7bf7c17dbd15b2b07
	  System UUID:                27d32824-d70e-4671-9d12-e8d9e33531ea
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7pjg2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  default                     hello-node-connect-7d85dfc575-ls6vq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  default                     mysql-5bb876957f-vfcgg                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m25s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 coredns-66bc5c9577-qt799                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m29s
	  kube-system                 etcd-functional-856205                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m35s
	  kube-system                 kindnet-2788m                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m30s
	  kube-system                 kube-apiserver-functional-856205              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-controller-manager-functional-856205     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-proxy-9d6ws                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-scheduler-functional-856205              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-sbxr7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lgmr4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m27s                  kube-proxy       
	  Normal   Starting                 5m45s                  kube-proxy       
	  Normal   Starting                 6m29s                  kube-proxy       
	  Warning  CgroupV1                 7m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m35s                  kubelet          Node functional-856205 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m35s                  kubelet          Node functional-856205 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m35s                  kubelet          Node functional-856205 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m35s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           7m30s                  node-controller  Node functional-856205 event: Registered Node functional-856205 in Controller
	  Normal   NodeReady                6m48s                  kubelet          Node functional-856205 status is now: NodeReady
	  Warning  ContainerGCFailed        6m35s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           6m27s                  node-controller  Node functional-856205 event: Registered Node functional-856205 in Controller
	  Normal   Starting                 5m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m52s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m51s (x8 over 5m52s)  kubelet          Node functional-856205 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m51s (x8 over 5m52s)  kubelet          Node functional-856205 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m51s (x8 over 5m52s)  kubelet          Node functional-856205 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m44s                  node-controller  Node functional-856205 event: Registered Node functional-856205 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 e7 99 b7 01 f9 08 06
	[  +4.819792] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 cb 31 e8 a7 d4 08 06
	[  +1.686116] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 82 ab 22 c3 73 08 06
	[Sep 4 05:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 8e 77 75 56 51 08 06
	[  +0.292319] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 8e 77 75 56 51 08 06
	[ +25.895647] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 66 06 76 0b 88 08 06
	[Sep 4 06:03] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[  +1.006977] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[  +2.011803] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[  +4.255528] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[Sep 4 06:04] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[ +16.126348] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[ +34.044412] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	
	
	==> etcd [4f71ad57d461e7d38bfc166708d83f44b726e784b31b29b755d2135cf0e7d00f] <==
	{"level":"warn","ts":"2025-09-04T06:08:29.435633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.442279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.448058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.505154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.511319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.517631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.612813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53042","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T06:08:54.152985Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-04T06:08:54.153075Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-856205","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-04T06:08:54.153350Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-04T06:08:54.305754Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-04T06:08:54.305840Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T06:08:54.305876Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-04T06:08:54.305992Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-04T06:08:54.305952Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-04T06:08:54.306027Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-04T06:08:54.306045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-04T06:08:54.305976Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-04T06:08:54.306066Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-04T06:08:54.306072Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T06:08:54.305994Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-04T06:08:54.309325Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-04T06:08:54.309394Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T06:08:54.309426Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-04T06:08:54.309439Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-856205","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [5c27ea60586daf8c59a7871c3ab63bea1f170435672da8f4573c0e3052de96a1] <==
	{"level":"warn","ts":"2025-09-04T06:09:12.050600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.056429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.062854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.110816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.124258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.130684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.138820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.145010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.150831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.204146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.231978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.239459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.246153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.252862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.259026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.265159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.272361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.300107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.307305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.313801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.320043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.325912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.354537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.361982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.368252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57618","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:15:00 up  3:57,  0 users,  load average: 0.06, 0.75, 1.91
	Linux functional-856205 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [13225950240b3534b25d0e7c54e06fd2eb4d6e6d1b64e65029efc6b789a8280f] <==
	I0904 06:08:27.205059       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0904 06:08:27.205305       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0904 06:08:27.205496       1 main.go:148] setting mtu 1500 for CNI 
	I0904 06:08:27.205515       1 main.go:178] kindnetd IP family: "ipv4"
	I0904 06:08:27.205529       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-04T06:08:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0904 06:08:27.506040       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0904 06:08:27.506123       1 controller.go:381] "Waiting for informer caches to sync"
	I0904 06:08:27.506158       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0904 06:08:27.506373       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0904 06:08:30.407353       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0904 06:08:30.407394       1 metrics.go:72] Registering metrics
	I0904 06:08:30.407472       1 controller.go:711] "Syncing nftables rules"
	I0904 06:08:37.505928       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:08:37.506005       1 main.go:301] handling current node
	I0904 06:08:47.506772       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:08:47.506803       1 main.go:301] handling current node
	
	
	==> kindnet [1f7565fa89e33e274beff75f61697f2706ecaacd99e7049a0facb500f82ddfc8] <==
	I0904 06:12:54.600479       1 main.go:301] handling current node
	I0904 06:13:04.601473       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:13:04.601508       1 main.go:301] handling current node
	I0904 06:13:14.601198       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:13:14.601247       1 main.go:301] handling current node
	I0904 06:13:24.601029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:13:24.601065       1 main.go:301] handling current node
	I0904 06:13:34.601670       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:13:34.601705       1 main.go:301] handling current node
	I0904 06:13:44.603954       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:13:44.603987       1 main.go:301] handling current node
	I0904 06:13:54.600494       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:13:54.600533       1 main.go:301] handling current node
	I0904 06:14:04.607905       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:14:04.607936       1 main.go:301] handling current node
	I0904 06:14:14.605138       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:14:14.605184       1 main.go:301] handling current node
	I0904 06:14:24.601513       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:14:24.601547       1 main.go:301] handling current node
	I0904 06:14:34.601012       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:14:34.601067       1 main.go:301] handling current node
	I0904 06:14:44.609943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:14:44.609980       1 main.go:301] handling current node
	I0904 06:14:54.601345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:14:54.601394       1 main.go:301] handling current node
	
	
	==> kube-apiserver [771936d22e57c182d02189a3ee4f00cb34c26beeea34a37be8a97c913b60d937] <==
	I0904 06:09:16.460141       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 06:09:16.661258       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0904 06:09:16.810082       1 controller.go:667] quota admission added evaluator for: endpoints
	I0904 06:09:27.850431       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.195.203"}
	I0904 06:09:31.867470       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.102.59"}
	I0904 06:09:35.270116       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.121.193"}
	E0904 06:09:51.415600       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37678: use of closed network connection
	E0904 06:09:52.478581       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54388: use of closed network connection
	E0904 06:09:53.920412       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54402: use of closed network connection
	E0904 06:09:56.644826       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54436: use of closed network connection
	E0904 06:09:56.807508       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54456: use of closed network connection
	I0904 06:09:58.106138       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.85.195"}
	I0904 06:10:00.527352       1 controller.go:667] quota admission added evaluator for: namespaces
	I0904 06:10:00.739362       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.97.11"}
	I0904 06:10:00.814359       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.122.216"}
	E0904 06:10:05.051876       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41252: use of closed network connection
	I0904 06:10:05.175448       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.138.218"}
	I0904 06:10:22.612146       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:10:30.518252       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:11:40.797091       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:11:55.512132       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:13:05.595428       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:13:08.798259       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:14:15.654594       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:14:19.685760       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [3d296839d66fd9e6b36eed5dd5fc6ad1490e30223f583a07c9669caae39b0c0a] <==
	I0904 06:08:33.618566       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0904 06:08:33.618610       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0904 06:08:33.618638       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0904 06:08:33.618718       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0904 06:08:33.619838       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 06:08:33.621289       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0904 06:08:33.622570       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 06:08:33.622671       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 06:08:33.624521       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 06:08:33.626639       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0904 06:08:33.627842       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0904 06:08:33.630081       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0904 06:08:33.631273       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 06:08:33.633509       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0904 06:08:33.633618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0904 06:08:33.634793       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0904 06:08:33.634814       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0904 06:08:33.637021       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0904 06:08:33.637126       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 06:08:33.637221       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-856205"
	I0904 06:08:33.637274       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 06:08:33.638134       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0904 06:08:33.640487       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0904 06:08:33.641689       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0904 06:08:33.641843       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [4d6b7b0accf43ca39608d20a14d951cebccb03971943583d29c042a956466383] <==
	I0904 06:09:16.407208       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0904 06:09:16.407268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0904 06:09:16.407563       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0904 06:09:16.407694       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 06:09:16.407777       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-856205"
	I0904 06:09:16.407848       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 06:09:16.407920       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0904 06:09:16.408822       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0904 06:09:16.408868       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0904 06:09:16.410010       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0904 06:09:16.410040       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0904 06:09:16.411211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 06:09:16.413836       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 06:09:16.413893       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 06:09:16.413903       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0904 06:09:16.413910       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 06:09:16.418117       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 06:09:16.419273       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0904 06:09:16.422492       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0904 06:10:00.610626       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.614230       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.619396       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.621684       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.622874       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.628629       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [45872c4231d69b904a50bf2a2ac35a872281751f36488d3a2110ba796c7a7ce7] <==
	I0904 06:09:14.227843       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:09:14.401202       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:09:14.502062       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:09:14.502101       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 06:09:14.502195       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:09:14.522854       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:09:14.522930       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:09:14.527433       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:09:14.527834       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:09:14.527867       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:09:14.529193       1 config.go:309] "Starting node config controller"
	I0904 06:09:14.529209       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:09:14.529222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:09:14.529234       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:09:14.529251       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:09:14.529279       1 config.go:200] "Starting service config controller"
	I0904 06:09:14.529296       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:09:14.529326       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:09:14.529334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:09:14.629421       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 06:09:14.629439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:09:14.629474       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b0d09719dc3eb2ce5cf65eee533348c952b4442eb36dbe71aca67cb3db821ec2] <==
	I0904 06:08:27.204152       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:08:27.501171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0904 06:08:27.502485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-856205&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0904 06:08:30.401582       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:08:30.401694       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 06:08:30.401803       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:08:30.524274       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:08:30.524346       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:08:30.529415       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:08:30.529752       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:08:30.529784       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:08:30.530852       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:08:30.530928       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:08:30.530996       1 config.go:200] "Starting service config controller"
	I0904 06:08:30.531025       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:08:30.531004       1 config.go:309] "Starting node config controller"
	I0904 06:08:30.531082       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:08:30.531113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:08:30.531078       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:08:30.531171       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:08:30.631549       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:08:30.631564       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 06:08:30.631561       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0d9825afa3673c093e8ac2051065915ff7fe26a83895a28e12d57c379eb37f4] <==
	I0904 06:09:11.512737       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:09:13.000426       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:09:13.000546       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:09:13.000587       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:09:13.000622       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:09:13.107695       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 06:09:13.107830       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:09:13.110528       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:09:13.110568       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:09:13.110887       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:09:13.111127       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 06:09:13.210786       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c7b52abe071551817e7341b649d02c35c8999ecd9204707fbf108471d035f12b] <==
	I0904 06:08:28.115146       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:08:30.213934       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:08:30.214075       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:08:30.214119       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:08:30.214157       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:08:30.306948       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 06:08:30.312180       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:08:30.315631       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:08:30.316255       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:08:30.316286       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:08:30.316311       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 06:08:30.416807       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:08:54.156616       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0904 06:08:54.156729       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0904 06:08:54.156916       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0904 06:08:54.156958       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0904 06:08:54.157003       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 04 06:14:08 functional-856205 kubelet[5362]: E0904 06:14:08.755693    5362 manager.go:1116] Failed to create existing container: /docker/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/crio-de4817ee09ece0bbc4f15623a54e21e3ae1d5b38b5086974bacf11c3f82577ab: Error finding container de4817ee09ece0bbc4f15623a54e21e3ae1d5b38b5086974bacf11c3f82577ab: Status 404 returned error can't find the container with id de4817ee09ece0bbc4f15623a54e21e3ae1d5b38b5086974bacf11c3f82577ab
	Sep 04 06:14:08 functional-856205 kubelet[5362]: E0904 06:14:08.803960    5362 container_manager_linux.go:562] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75, memory: /docker/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/system.slice/kubelet.service"
	Sep 04 06:14:08 functional-856205 kubelet[5362]: E0904 06:14:08.853732    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966448853458160  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:08 functional-856205 kubelet[5362]: E0904 06:14:08.853773    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966448853458160  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:11 functional-856205 kubelet[5362]: E0904 06:14:11.616661    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7pjg2" podUID="6391c560-49d2-4412-a202-0640a3dbb40c"
	Sep 04 06:14:18 functional-856205 kubelet[5362]: E0904 06:14:18.855382    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966458855157857  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:18 functional-856205 kubelet[5362]: E0904 06:14:18.855421    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966458855157857  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:19 functional-856205 kubelet[5362]: E0904 06:14:19.617681    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lgmr4" podUID="ea2824e3-6ba9-44da-9ade-87db9d77804d"
	Sep 04 06:14:20 functional-856205 kubelet[5362]: E0904 06:14:20.616016    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-ls6vq" podUID="7e592927-ac45-4888-854f-fe1c3d72a5b9"
	Sep 04 06:14:26 functional-856205 kubelet[5362]: E0904 06:14:26.616154    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7pjg2" podUID="6391c560-49d2-4412-a202-0640a3dbb40c"
	Sep 04 06:14:28 functional-856205 kubelet[5362]: E0904 06:14:28.857018    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966468856787763  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:28 functional-856205 kubelet[5362]: E0904 06:14:28.857057    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966468856787763  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:31 functional-856205 kubelet[5362]: E0904 06:14:31.616798    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-ls6vq" podUID="7e592927-ac45-4888-854f-fe1c3d72a5b9"
	Sep 04 06:14:31 functional-856205 kubelet[5362]: E0904 06:14:31.617658    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lgmr4" podUID="ea2824e3-6ba9-44da-9ade-87db9d77804d"
	Sep 04 06:14:38 functional-856205 kubelet[5362]: E0904 06:14:38.858421    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966478858182319  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:38 functional-856205 kubelet[5362]: E0904 06:14:38.858462    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966478858182319  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:39 functional-856205 kubelet[5362]: E0904 06:14:39.616332    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7pjg2" podUID="6391c560-49d2-4412-a202-0640a3dbb40c"
	Sep 04 06:14:43 functional-856205 kubelet[5362]: E0904 06:14:43.616870    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lgmr4" podUID="ea2824e3-6ba9-44da-9ade-87db9d77804d"
	Sep 04 06:14:44 functional-856205 kubelet[5362]: E0904 06:14:44.616343    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-ls6vq" podUID="7e592927-ac45-4888-854f-fe1c3d72a5b9"
	Sep 04 06:14:48 functional-856205 kubelet[5362]: E0904 06:14:48.860012    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966488859753740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:48 functional-856205 kubelet[5362]: E0904 06:14:48.860043    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966488859753740  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:50 functional-856205 kubelet[5362]: E0904 06:14:50.616447    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7pjg2" podUID="6391c560-49d2-4412-a202-0640a3dbb40c"
	Sep 04 06:14:57 functional-856205 kubelet[5362]: E0904 06:14:57.616043    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-ls6vq" podUID="7e592927-ac45-4888-854f-fe1c3d72a5b9"
	Sep 04 06:14:58 functional-856205 kubelet[5362]: E0904 06:14:58.861621    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966498861395230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:14:58 functional-856205 kubelet[5362]: E0904 06:14:58.861655    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966498861395230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	
	
	==> storage-provisioner [3e2dde27e07f96a2079f4790c7b3e6f19701050a5abd7bd9ba621dcb0d292972] <==
	I0904 06:08:41.672837       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 06:08:41.680104       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 06:08:41.680155       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0904 06:08:41.682117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:08:45.137444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:08:49.398315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:08:52.997320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ec50b019b4219ab268ee82e4dc5fdda783969369bc91e1cbc5819625d10f931e] <==
	W0904 06:14:34.947167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:36.950076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:36.954074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:38.956982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:38.962012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:40.964671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:40.969360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:42.972272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:42.976038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:44.979712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:44.984475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:46.987845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:46.991589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:48.994938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:48.998626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:51.002090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:51.006380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:53.008937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:53.014068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:55.016894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:55.020520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:57.023556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:57.028525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:59.031464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:14:59.035382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-856205 -n functional-856205
helpers_test.go:269: (dbg) Run:  kubectl --context functional-856205 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7pjg2 hello-node-connect-7d85dfc575-ls6vq kubernetes-dashboard-855c9754f9-lgmr4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-856205 describe pod busybox-mount hello-node-75c85bcc94-7pjg2 hello-node-connect-7d85dfc575-ls6vq kubernetes-dashboard-855c9754f9-lgmr4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-856205 describe pod busybox-mount hello-node-75c85bcc94-7pjg2 hello-node-connect-7d85dfc575-ls6vq kubernetes-dashboard-855c9754f9-lgmr4: exit status 1 (75.951569ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-856205/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 06:09:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e406a5a33e982ad43aa7197e3b6ac43312297a9a5019dca167e6bc280a114eb0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 04 Sep 2025 06:09:52 +0000
	      Finished:     Thu, 04 Sep 2025 06:09:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8vmj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-s8vmj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m12s  default-scheduler  Successfully assigned default/busybox-mount to functional-856205
	  Normal  Pulling    5m11s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m9s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.092s (2.092s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m9s   kubelet            Created container: mount-munger
	  Normal  Started    5m9s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7pjg2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-856205/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 06:09:31 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-njpm7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-njpm7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m30s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7pjg2 to functional-856205
	  Normal   Pulling    2m14s (x5 over 5m29s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m14s (x5 over 5m29s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     2m14s (x5 over 5m29s)  kubelet            Error: ErrImagePull
	  Warning  Failed     22s (x20 over 5m29s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    11s (x21 over 5m29s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ls6vq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-856205/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 06:10:05 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nn6w6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nn6w6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m56s                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ls6vq to functional-856205
	  Normal   Pulling    67s (x5 over 4m56s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     67s (x5 over 4m29s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     67s (x5 over 4m29s)   kubelet            Error: ErrImagePull
	  Warning  Failed     17s (x15 over 4m29s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4s (x16 over 4m29s)   kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lgmr4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-856205 describe pod busybox-mount hello-node-75c85bcc94-7pjg2 hello-node-connect-7d85dfc575-ls6vq kubernetes-dashboard-855c9754f9-lgmr4: exit status 1
E0904 06:17:57.337117 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.24s)
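Note: the kubelet log and pod events above show two separate image-pull failures behind this run: CRI-O rejects the unqualified short name "kicbase/echo-server" because no unqualified-search registries are defined in /etc/containers/registries.conf, and the kubernetes-dashboard image pull is throttled by Docker Hub's unauthenticated rate limit (toomanyrequests). A minimal sketch of a workaround for the short-name case, assuming the image is also published on Docker Hub as docker.io/kicbase/echo-server:1.0 (the fully qualified reference and tag are assumptions, not taken from this run):

	# Point the existing deployment at a fully qualified image so CRI-O never
	# needs short-name resolution (illustrative only; the container name
	# "echo-server" is taken from the describe output above):
	kubectl --context functional-856205 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:1.0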

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-856205 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-856205 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-ls6vq" [7e592927-ac45-4888-854f-fe1c3d72a5b9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-856205 -n functional-856205
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-04 06:20:05.476848737 +0000 UTC m=+1196.265879358
functional_test.go:1645: (dbg) Run:  kubectl --context functional-856205 describe po hello-node-connect-7d85dfc575-ls6vq -n default
functional_test.go:1645: (dbg) kubectl --context functional-856205 describe po hello-node-connect-7d85dfc575-ls6vq -n default:
Name:             hello-node-connect-7d85dfc575-ls6vq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-856205/192.168.49.2
Start Time:       Thu, 04 Sep 2025 06:10:05 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nn6w6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-nn6w6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ls6vq to functional-856205
Normal   Pulling    6m11s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m11s (x5 over 9m33s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m11s (x5 over 9m33s)   kubelet            Error: ErrImagePull
Warning  Failed     4m31s (x19 over 9m33s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m5s (x21 over 9m33s)   kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-856205 logs hello-node-connect-7d85dfc575-ls6vq -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-856205 logs hello-node-connect-7d85dfc575-ls6vq -n default: exit status 1 (59.215695ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ls6vq" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-856205 logs hello-node-connect-7d85dfc575-ls6vq -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-856205 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-ls6vq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-856205/192.168.49.2
Start Time:       Thu, 04 Sep 2025 06:10:05 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nn6w6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-nn6w6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ls6vq to functional-856205
Normal   Pulling    6m11s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m11s (x5 over 9m33s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m11s (x5 over 9m33s)   kubelet            Error: ErrImagePull
Warning  Failed     4m31s (x19 over 9m33s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m5s (x21 over 9m33s)   kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-856205 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-856205 logs -l app=hello-node-connect: exit status 1 (60.307215ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ls6vq" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-856205 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-856205 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.138.218
IPs:                      10.108.138.218
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31592/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
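The empty Endpoints field above is the expected consequence of the pod never becoming Ready: a NodePort Service only lists endpoints for Ready pods matching its selector, so connections to NodePort 31592 would fail even though the Service object itself is configured correctly. A quick check, sketched here and not part of the captured run:

	# Confirm the service has no ready endpoints and that the selected pod is
	# still stuck in ImagePullBackOff (illustrative only):
	kubectl --context functional-856205 get endpoints hello-node-connect
	kubectl --context functional-856205 get pods -l app=hello-node-connect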
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-856205
helpers_test.go:243: (dbg) docker inspect functional-856205:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75",
	        "Created": "2025-09-04T06:07:11.035642359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1546419,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:07:11.067723153Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/hostname",
	        "HostsPath": "/var/lib/docker/containers/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/hosts",
	        "LogPath": "/var/lib/docker/containers/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75/d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75-json.log",
	        "Name": "/functional-856205",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-856205:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-856205",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d7c15d351271d6cababc6a5b92dbedfd65c7a93b1ef81d8e4091b33f9381be75",
	                "LowerDir": "/var/lib/docker/overlay2/d33139dae79187bfd277f343bc0f354a677e09aebbbb6dfcaf24e951d2156502-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d33139dae79187bfd277f343bc0f354a677e09aebbbb6dfcaf24e951d2156502/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d33139dae79187bfd277f343bc0f354a677e09aebbbb6dfcaf24e951d2156502/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d33139dae79187bfd277f343bc0f354a677e09aebbbb6dfcaf24e951d2156502/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-856205",
	                "Source": "/var/lib/docker/volumes/functional-856205/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-856205",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-856205",
	                "name.minikube.sigs.k8s.io": "functional-856205",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c84ddeed4ee403c03a5ccfb87a054e52dcc4b1db8fd2bf7b8979144a3f5519e",
	            "SandboxKey": "/var/run/docker/netns/5c84ddeed4ee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33970"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33973"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33971"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33972"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-856205": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:af:b8:75:73:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8db23358731e7cff5328ad69c486affb0ff9c40289b5dbaf6ead93c1165a1548",
	                    "EndpointID": "9a0d1cc729f0f792a8fa4a51b8bda2614262a3339ee4c316798bc1db23df5daf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-856205",
	                        "d7c15d351271"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
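For reference, the port bindings in the inspect output above can be read programmatically rather than by eye; below is a minimal Go sketch (assuming the JSON above has been saved to a hypothetical file inspect.json) that prints the host address mapped to the apiserver port 8441/tcp. Against the data above it would print 127.0.0.1:33972.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// portBinding mirrors the HostIp/HostPort pairs under NetworkSettings.Ports.
type portBinding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// inspectEntry keeps only the fields needed from `docker inspect` output.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// inspect.json is assumed to hold the `docker inspect <container>` JSON shown above.
	data, err := os.ReadFile("inspect.json")
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(data, &entries); err != nil {
		panic(err)
	}
	// Print every host binding for the container's 8441/tcp (kube-apiserver) port.
	for _, b := range entries[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}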
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-856205 -n functional-856205
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 logs -n 25: (1.363954049s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-856205 ssh findmnt -T /mount2                                                                  │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ tunnel         │ functional-856205 tunnel --alsologtostderr                                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ ssh            │ functional-856205 ssh findmnt -T /mount3                                                                  │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ mount          │ -p functional-856205 --kill=true                                                                          │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ addons         │ functional-856205 addons list                                                                             │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ addons         │ functional-856205 addons list -o json                                                                     │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │ 04 Sep 25 06:09 UTC │
	│ start          │ -p functional-856205 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ start          │ -p functional-856205 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ start          │ -p functional-856205 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-856205 --alsologtostderr -v=1                                            │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:09 UTC │                     │
	│ update-context │ functional-856205 update-context --alsologtostderr -v=2                                                   │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ update-context │ functional-856205 update-context --alsologtostderr -v=2                                                   │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ update-context │ functional-856205 update-context --alsologtostderr -v=2                                                   │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls --format short --alsologtostderr                                               │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls --format yaml --alsologtostderr                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ ssh            │ functional-856205 ssh pgrep buildkitd                                                                     │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │                     │
	│ image          │ functional-856205 image build -t localhost/my-image:functional-856205 testdata/build --alsologtostderr    │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls                                                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls --format json --alsologtostderr                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ image          │ functional-856205 image ls --format table --alsologtostderr                                               │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:10 UTC │ 04 Sep 25 06:10 UTC │
	│ service        │ functional-856205 service list                                                                            │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │ 04 Sep 25 06:19 UTC │
	│ service        │ functional-856205 service list -o json                                                                    │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │ 04 Sep 25 06:19 UTC │
	│ service        │ functional-856205 service --namespace=default --https --url hello-node                                    │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │                     │
	│ service        │ functional-856205 service hello-node --url --format={{.IP}}                                               │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │                     │
	│ service        │ functional-856205 service hello-node --url                                                                │ functional-856205 │ jenkins │ v1.36.0 │ 04 Sep 25 06:19 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:09:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:09:59.010801 1562791 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:09:59.011057 1562791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:09:59.011068 1562791 out.go:374] Setting ErrFile to fd 2...
	I0904 06:09:59.011074 1562791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:09:59.011411 1562791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:09:59.011988 1562791 out.go:368] Setting JSON to false
	I0904 06:09:59.013235 1562791 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13949,"bootTime":1756952250,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:09:59.013348 1562791 start.go:140] virtualization: kvm guest
	I0904 06:09:59.014926 1562791 out.go:179] * [functional-856205] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:09:59.016529 1562791 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:09:59.016531 1562791 notify.go:220] Checking for updates...
	I0904 06:09:59.019540 1562791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:09:59.021121 1562791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:09:59.022435 1562791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:09:59.023708 1562791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:09:59.024941 1562791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:09:59.026499 1562791 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:09:59.027039 1562791 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:09:59.051784 1562791 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:09:59.051905 1562791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:09:59.107671 1562791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-04 06:09:59.09700648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:09:59.107787 1562791 docker.go:318] overlay module found
	I0904 06:09:59.110288 1562791 out.go:179] * Using the docker driver based on existing profile
	I0904 06:09:59.111655 1562791 start.go:304] selected driver: docker
	I0904 06:09:59.111672 1562791 start.go:918] validating driver "docker" against &{Name:functional-856205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-856205 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:09:59.111789 1562791 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:09:59.114592 1562791 out.go:203] 
	W0904 06:09:59.115941 1562791 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 06:09:59.117320 1562791 out.go:203] 
	
	
	==> CRI-O <==
	Sep 04 06:17:21 functional-856205 crio[4996]: time="2025-09-04 06:17:21.616889114Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a676c51c-7b83-4a98-ad97-8fb22c81f1b8 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:17:34 functional-856205 crio[4996]: time="2025-09-04 06:17:34.616355698Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=50309346-5f73-4469-accb-a1044d884091 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:17:34 functional-856205 crio[4996]: time="2025-09-04 06:17:34.616601598Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=50309346-5f73-4469-accb-a1044d884091 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:17:48 functional-856205 crio[4996]: time="2025-09-04 06:17:48.616645854Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c53ccca4-b00c-45f7-b5d1-fef015326bdf name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:17:48 functional-856205 crio[4996]: time="2025-09-04 06:17:48.616930300Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c53ccca4-b00c-45f7-b5d1-fef015326bdf name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:17:59 functional-856205 crio[4996]: time="2025-09-04 06:17:59.616662997Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6c4c2746-8830-4273-af1a-ca8cee9a672a name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:17:59 functional-856205 crio[4996]: time="2025-09-04 06:17:59.616987795Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6c4c2746-8830-4273-af1a-ca8cee9a672a name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:18:10 functional-856205 crio[4996]: time="2025-09-04 06:18:10.616638452Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6036e562-7e64-474c-9f0a-301f9f685301 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:18:10 functional-856205 crio[4996]: time="2025-09-04 06:18:10.616923391Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6036e562-7e64-474c-9f0a-301f9f685301 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:18:10 functional-856205 crio[4996]: time="2025-09-04 06:18:10.617594931Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=653bc660-5466-4919-89a5-b6c3e3f73e16 name=/runtime.v1.ImageService/PullImage
	Sep 04 06:18:10 functional-856205 crio[4996]: time="2025-09-04 06:18:10.622660936Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 04 06:18:52 functional-856205 crio[4996]: time="2025-09-04 06:18:52.616852446Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=289a107f-7fd5-4978-ab86-9fcf1367ca99 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:18:52 functional-856205 crio[4996]: time="2025-09-04 06:18:52.617221486Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=289a107f-7fd5-4978-ab86-9fcf1367ca99 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:04 functional-856205 crio[4996]: time="2025-09-04 06:19:04.616888727Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d03428c8-2bd4-4a6c-ae71-bf2857b83d5e name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:04 functional-856205 crio[4996]: time="2025-09-04 06:19:04.617202331Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=d03428c8-2bd4-4a6c-ae71-bf2857b83d5e name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:15 functional-856205 crio[4996]: time="2025-09-04 06:19:15.616574673Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=854992a9-e7ec-4776-a9f8-9a694297c58e name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:15 functional-856205 crio[4996]: time="2025-09-04 06:19:15.616862632Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=854992a9-e7ec-4776-a9f8-9a694297c58e name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:26 functional-856205 crio[4996]: time="2025-09-04 06:19:26.616585844Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4176ada4-79a3-4290-a48c-865e195fa471 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:26 functional-856205 crio[4996]: time="2025-09-04 06:19:26.616840935Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4176ada4-79a3-4290-a48c-865e195fa471 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:39 functional-856205 crio[4996]: time="2025-09-04 06:19:39.616633188Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3f0bb478-f3be-4bd6-9fb2-f7f1339b0451 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:39 functional-856205 crio[4996]: time="2025-09-04 06:19:39.616924287Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3f0bb478-f3be-4bd6-9fb2-f7f1339b0451 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:51 functional-856205 crio[4996]: time="2025-09-04 06:19:51.616211634Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=de86ea3e-ab4d-4623-9766-691a41bca650 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:19:51 functional-856205 crio[4996]: time="2025-09-04 06:19:51.616561870Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=de86ea3e-ab4d-4623-9766-691a41bca650 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:20:03 functional-856205 crio[4996]: time="2025-09-04 06:20:03.616479450Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a8147161-93e2-40cb-823e-8752bac3cc82 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:20:03 functional-856205 crio[4996]: time="2025-09-04 06:20:03.617350685Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a8147161-93e2-40cb-823e-8752bac3cc82 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	6489ddfca5443       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   9552e723ee0e5       dashboard-metrics-scraper-77bf4d6c4c-sbxr7
	0f7539f2504e3       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  10 minutes ago      Running             nginx                       0                   a6f815279342e       nginx-svc
	d9bb5da3ea85f       docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57                  10 minutes ago      Running             myfrontend                  0                   d9ac997ceed30       sp-pod
	e406a5a33e982       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   ead61ea1a7ecd       busybox-mount
	7a3de8ebe3c94       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  10 minutes ago      Running             mysql                       0                   c2e45d80d5c54       mysql-5bb876957f-vfcgg
	a4542298855b7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     2                   db9b14ee5303e       coredns-66bc5c9577-qt799
	45872c4231d69       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 10 minutes ago      Running             kube-proxy                  2                   de4817ee09ece       kube-proxy-9d6ws
	1f7565fa89e33       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 2                   26c3fed2a9856       kindnet-2788m
	ec50b019b4219       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   97f3c489ed1d9       storage-provisioner
	771936d22e57c       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 10 minutes ago      Running             kube-apiserver              0                   d13b4a409b186       kube-apiserver-functional-856205
	4d6b7b0accf43       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Running             kube-controller-manager     2                   d4ae4d74e4d05       kube-controller-manager-functional-856205
	c0d9825afa367       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 10 minutes ago      Running             kube-scheduler              2                   76cab2c03a837       kube-scheduler-functional-856205
	5c27ea60586da       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        2                   a9e72a325305a       etcd-functional-856205
	3e2dde27e07f9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   97f3c489ed1d9       storage-provisioner
	c7b52abe07155       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 11 minutes ago      Exited              kube-scheduler              1                   76cab2c03a837       kube-scheduler-functional-856205
	3d296839d66fd       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 11 minutes ago      Exited              kube-controller-manager     1                   d4ae4d74e4d05       kube-controller-manager-functional-856205
	4f71ad57d461e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        1                   a9e72a325305a       etcd-functional-856205
	77e8aa8d9f2ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     1                   db9b14ee5303e       coredns-66bc5c9577-qt799
	13225950240b3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 1                   26c3fed2a9856       kindnet-2788m
	b0d09719dc3eb       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 11 minutes ago      Exited              kube-proxy                  1                   de4817ee09ece       kube-proxy-9d6ws
	
	
	==> coredns [77e8aa8d9f2ad6c6e7de599be18ce68e6ceffd0d1b64154b30871700d4ac685c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58497 - 40261 "HINFO IN 1560186451247838692.6315298664168261141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.098624079s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4542298855b750fd2d580ac0659afec08a2a08745fcaf5e0b9806e05251988e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60773 - 42611 "HINFO IN 4738126952861082454.2946721953905675424. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039442542s
	
	
	==> describe nodes <==
	Name:               functional-856205
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-856205
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=functional-856205
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_07_26_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:07:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-856205
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 06:20:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 06:17:43 +0000   Thu, 04 Sep 2025 06:07:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 06:17:43 +0000   Thu, 04 Sep 2025 06:07:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 06:17:43 +0000   Thu, 04 Sep 2025 06:07:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 06:17:43 +0000   Thu, 04 Sep 2025 06:08:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-856205
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 1db52e3d7b2744a7bf7c17dbd15b2b07
	  System UUID:                27d32824-d70e-4671-9d12-e8d9e33531ea
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7pjg2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-ls6vq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-vfcgg                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-qt799                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-856205                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-2788m                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-856205              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-856205     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9d6ws                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-856205              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-sbxr7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lgmr4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-856205 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-856205 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-856205 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-856205 event: Registered Node functional-856205 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-856205 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-856205 event: Registered Node functional-856205 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-856205 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-856205 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-856205 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-856205 event: Registered Node functional-856205 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 e7 99 b7 01 f9 08 06
	[  +4.819792] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 cb 31 e8 a7 d4 08 06
	[  +1.686116] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 82 ab 22 c3 73 08 06
	[Sep 4 05:57] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 8e 77 75 56 51 08 06
	[  +0.292319] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 8e 77 75 56 51 08 06
	[ +25.895647] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 66 06 76 0b 88 08 06
	[Sep 4 06:03] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[  +1.006977] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[  +2.011803] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[  +4.255528] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[Sep 4 06:04] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[ +16.126348] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	[ +34.044412] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 f6 9d 6b d4 8b 4a 61 0e 7c 94 34 08 00
	
	
	==> etcd [4f71ad57d461e7d38bfc166708d83f44b726e784b31b29b755d2135cf0e7d00f] <==
	{"level":"warn","ts":"2025-09-04T06:08:29.435633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.442279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.448058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.505154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.511319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.517631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:08:29.612813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53042","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T06:08:54.152985Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-04T06:08:54.153075Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-856205","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-04T06:08:54.153350Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-04T06:08:54.305754Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-04T06:08:54.305840Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T06:08:54.305876Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-04T06:08:54.305992Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-04T06:08:54.305952Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-04T06:08:54.306027Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-04T06:08:54.306045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-04T06:08:54.305976Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-04T06:08:54.306066Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-04T06:08:54.306072Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T06:08:54.305994Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-04T06:08:54.309325Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-04T06:08:54.309394Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-04T06:08:54.309426Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-04T06:08:54.309439Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-856205","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [5c27ea60586daf8c59a7871c3ab63bea1f170435672da8f4573c0e3052de96a1] <==
	{"level":"warn","ts":"2025-09-04T06:09:12.110816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.124258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.130684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.138820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.145010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.150831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.204146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.231978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.239459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.246153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.252862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.259026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.265159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.272361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.300107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.307305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.313801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.320043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.325912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.354537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.361982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:09:12.368252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57618","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T06:19:11.516101Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1230}
	{"level":"info","ts":"2025-09-04T06:19:11.535383Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1230,"took":"18.874514ms","hash":1270546469,"current-db-size-bytes":3727360,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-04T06:19:11.535428Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1270546469,"revision":1230,"compact-revision":-1}
	
	
	==> kernel <==
	 06:20:07 up  4:02,  0 users,  load average: 0.16, 0.40, 1.43
	Linux functional-856205 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [13225950240b3534b25d0e7c54e06fd2eb4d6e6d1b64e65029efc6b789a8280f] <==
	I0904 06:08:27.205059       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0904 06:08:27.205305       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0904 06:08:27.205496       1 main.go:148] setting mtu 1500 for CNI 
	I0904 06:08:27.205515       1 main.go:178] kindnetd IP family: "ipv4"
	I0904 06:08:27.205529       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-04T06:08:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0904 06:08:27.506040       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0904 06:08:27.506123       1 controller.go:381] "Waiting for informer caches to sync"
	I0904 06:08:27.506158       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0904 06:08:27.506373       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0904 06:08:30.407353       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0904 06:08:30.407394       1 metrics.go:72] Registering metrics
	I0904 06:08:30.407472       1 controller.go:711] "Syncing nftables rules"
	I0904 06:08:37.505928       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:08:37.506005       1 main.go:301] handling current node
	I0904 06:08:47.506772       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:08:47.506803       1 main.go:301] handling current node
	
	
	==> kindnet [1f7565fa89e33e274beff75f61697f2706ecaacd99e7049a0facb500f82ddfc8] <==
	I0904 06:18:04.600530       1 main.go:301] handling current node
	I0904 06:18:14.601652       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:18:14.601695       1 main.go:301] handling current node
	I0904 06:18:24.607904       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:18:24.607940       1 main.go:301] handling current node
	I0904 06:18:34.607880       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:18:34.607922       1 main.go:301] handling current node
	I0904 06:18:44.601922       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:18:44.601952       1 main.go:301] handling current node
	I0904 06:18:54.601059       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:18:54.601120       1 main.go:301] handling current node
	I0904 06:19:04.602849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:19:04.602914       1 main.go:301] handling current node
	I0904 06:19:14.600737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:19:14.600813       1 main.go:301] handling current node
	I0904 06:19:24.600990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:19:24.601020       1 main.go:301] handling current node
	I0904 06:19:34.607919       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:19:34.607962       1 main.go:301] handling current node
	I0904 06:19:44.602449       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:19:44.602500       1 main.go:301] handling current node
	I0904 06:19:54.607872       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:19:54.607908       1 main.go:301] handling current node
	I0904 06:20:04.602524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0904 06:20:04.602572       1 main.go:301] handling current node
	
	
	==> kube-apiserver [771936d22e57c182d02189a3ee4f00cb34c26beeea34a37be8a97c913b60d937] <==
	E0904 06:09:56.644826       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54436: use of closed network connection
	E0904 06:09:56.807508       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54456: use of closed network connection
	I0904 06:09:58.106138       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.85.195"}
	I0904 06:10:00.527352       1 controller.go:667] quota admission added evaluator for: namespaces
	I0904 06:10:00.739362       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.97.11"}
	I0904 06:10:00.814359       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.122.216"}
	E0904 06:10:05.051876       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:41252: use of closed network connection
	I0904 06:10:05.175448       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.138.218"}
	I0904 06:10:22.612146       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:10:30.518252       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:11:40.797091       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:11:55.512132       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:13:05.595428       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:13:08.798259       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:14:15.654594       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:14:19.685760       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:15:28.099706       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:15:40.062778       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:16:54.198564       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:16:58.373844       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:18:09.476913       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:18:12.417493       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:19:10.046287       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:19:13.023570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 06:19:29.295126       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [3d296839d66fd9e6b36eed5dd5fc6ad1490e30223f583a07c9669caae39b0c0a] <==
	I0904 06:08:33.618566       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0904 06:08:33.618610       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0904 06:08:33.618638       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0904 06:08:33.618718       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0904 06:08:33.619838       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 06:08:33.621289       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0904 06:08:33.622570       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 06:08:33.622671       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 06:08:33.624521       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 06:08:33.626639       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0904 06:08:33.627842       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0904 06:08:33.630081       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0904 06:08:33.631273       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 06:08:33.633509       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0904 06:08:33.633618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0904 06:08:33.634793       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0904 06:08:33.634814       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0904 06:08:33.637021       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0904 06:08:33.637126       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 06:08:33.637221       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-856205"
	I0904 06:08:33.637274       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 06:08:33.638134       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0904 06:08:33.640487       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0904 06:08:33.641689       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0904 06:08:33.641843       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [4d6b7b0accf43ca39608d20a14d951cebccb03971943583d29c042a956466383] <==
	I0904 06:09:16.407208       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0904 06:09:16.407268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0904 06:09:16.407563       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0904 06:09:16.407694       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 06:09:16.407777       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-856205"
	I0904 06:09:16.407848       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 06:09:16.407920       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0904 06:09:16.408822       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0904 06:09:16.408868       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0904 06:09:16.410010       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0904 06:09:16.410040       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0904 06:09:16.411211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 06:09:16.413836       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0904 06:09:16.413893       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 06:09:16.413903       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0904 06:09:16.413910       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 06:09:16.418117       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 06:09:16.419273       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0904 06:09:16.422492       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0904 06:10:00.610626       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.614230       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.619396       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.621684       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.622874       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0904 06:10:00.628629       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [45872c4231d69b904a50bf2a2ac35a872281751f36488d3a2110ba796c7a7ce7] <==
	I0904 06:09:14.227843       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:09:14.401202       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:09:14.502062       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:09:14.502101       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 06:09:14.502195       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:09:14.522854       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:09:14.522930       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:09:14.527433       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:09:14.527834       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:09:14.527867       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:09:14.529193       1 config.go:309] "Starting node config controller"
	I0904 06:09:14.529209       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:09:14.529222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:09:14.529234       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:09:14.529251       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:09:14.529279       1 config.go:200] "Starting service config controller"
	I0904 06:09:14.529296       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:09:14.529326       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:09:14.529334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:09:14.629421       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 06:09:14.629439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:09:14.629474       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b0d09719dc3eb2ce5cf65eee533348c952b4442eb36dbe71aca67cb3db821ec2] <==
	I0904 06:08:27.204152       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:08:27.501171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0904 06:08:27.502485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-856205&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0904 06:08:30.401582       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:08:30.401694       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0904 06:08:30.401803       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:08:30.524274       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:08:30.524346       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:08:30.529415       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:08:30.529752       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:08:30.529784       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:08:30.530852       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:08:30.530928       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:08:30.530996       1 config.go:200] "Starting service config controller"
	I0904 06:08:30.531025       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:08:30.531004       1 config.go:309] "Starting node config controller"
	I0904 06:08:30.531082       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:08:30.531113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:08:30.531078       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:08:30.531171       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:08:30.631549       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:08:30.631564       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 06:08:30.631561       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0d9825afa3673c093e8ac2051065915ff7fe26a83895a28e12d57c379eb37f4] <==
	I0904 06:09:11.512737       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:09:13.000426       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:09:13.000546       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:09:13.000587       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:09:13.000622       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:09:13.107695       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 06:09:13.107830       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:09:13.110528       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:09:13.110568       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:09:13.110887       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:09:13.111127       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 06:09:13.210786       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c7b52abe071551817e7341b649d02c35c8999ecd9204707fbf108471d035f12b] <==
	I0904 06:08:28.115146       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:08:30.213934       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:08:30.214075       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:08:30.214119       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:08:30.214157       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:08:30.306948       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 06:08:30.312180       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:08:30.315631       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:08:30.316255       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:08:30.316286       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:08:30.316311       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 06:08:30.416807       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:08:54.156616       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0904 06:08:54.156729       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0904 06:08:54.156916       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0904 06:08:54.156958       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0904 06:08:54.157003       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 04 06:19:08 functional-856205 kubelet[5362]: E0904 06:19:08.902506    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966748902240184  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:12 functional-856205 kubelet[5362]: E0904 06:19:12.616410    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7pjg2" podUID="6391c560-49d2-4412-a202-0640a3dbb40c"
	Sep 04 06:19:12 functional-856205 kubelet[5362]: E0904 06:19:12.616426    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-ls6vq" podUID="7e592927-ac45-4888-854f-fe1c3d72a5b9"
	Sep 04 06:19:15 functional-856205 kubelet[5362]: E0904 06:19:15.617190    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lgmr4" podUID="ea2824e3-6ba9-44da-9ade-87db9d77804d"
	Sep 04 06:19:18 functional-856205 kubelet[5362]: E0904 06:19:18.904001    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966758903833841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:18 functional-856205 kubelet[5362]: E0904 06:19:18.904039    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966758903833841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:24 functional-856205 kubelet[5362]: E0904 06:19:24.616207    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7pjg2" podUID="6391c560-49d2-4412-a202-0640a3dbb40c"
	Sep 04 06:19:26 functional-856205 kubelet[5362]: E0904 06:19:26.616373    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-ls6vq" podUID="7e592927-ac45-4888-854f-fe1c3d72a5b9"
	Sep 04 06:19:26 functional-856205 kubelet[5362]: E0904 06:19:26.617190    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lgmr4" podUID="ea2824e3-6ba9-44da-9ade-87db9d77804d"
	Sep 04 06:19:28 functional-856205 kubelet[5362]: E0904 06:19:28.905468    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966768905227291  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:28 functional-856205 kubelet[5362]: E0904 06:19:28.905501    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966768905227291  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:37 functional-856205 kubelet[5362]: E0904 06:19:37.616220    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7pjg2" podUID="6391c560-49d2-4412-a202-0640a3dbb40c"
	Sep 04 06:19:38 functional-856205 kubelet[5362]: E0904 06:19:38.616240    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-ls6vq" podUID="7e592927-ac45-4888-854f-fe1c3d72a5b9"
	Sep 04 06:19:38 functional-856205 kubelet[5362]: E0904 06:19:38.906786    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966778906544871  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:38 functional-856205 kubelet[5362]: E0904 06:19:38.906830    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966778906544871  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:39 functional-856205 kubelet[5362]: E0904 06:19:39.617183    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lgmr4" podUID="ea2824e3-6ba9-44da-9ade-87db9d77804d"
	Sep 04 06:19:48 functional-856205 kubelet[5362]: E0904 06:19:48.908372    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966788908145841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:48 functional-856205 kubelet[5362]: E0904 06:19:48.908408    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966788908145841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:51 functional-856205 kubelet[5362]: E0904 06:19:51.615936    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7pjg2" podUID="6391c560-49d2-4412-a202-0640a3dbb40c"
	Sep 04 06:19:51 functional-856205 kubelet[5362]: E0904 06:19:51.616915    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lgmr4" podUID="ea2824e3-6ba9-44da-9ade-87db9d77804d"
	Sep 04 06:19:53 functional-856205 kubelet[5362]: E0904 06:19:53.616391    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-ls6vq" podUID="7e592927-ac45-4888-854f-fe1c3d72a5b9"
	Sep 04 06:19:58 functional-856205 kubelet[5362]: E0904 06:19:58.910002    5362 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756966798909787168  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:19:58 functional-856205 kubelet[5362]: E0904 06:19:58.910054    5362 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756966798909787168  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:292547}  inodes_used:{value:128}}"
	Sep 04 06:20:02 functional-856205 kubelet[5362]: E0904 06:20:02.616207    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-7pjg2" podUID="6391c560-49d2-4412-a202-0640a3dbb40c"
	Sep 04 06:20:03 functional-856205 kubelet[5362]: E0904 06:20:03.618516    5362 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lgmr4" podUID="ea2824e3-6ba9-44da-9ade-87db9d77804d"
	
	
	==> storage-provisioner [3e2dde27e07f96a2079f4790c7b3e6f19701050a5abd7bd9ba621dcb0d292972] <==
	I0904 06:08:41.672837       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 06:08:41.680104       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 06:08:41.680155       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0904 06:08:41.682117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:08:45.137444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:08:49.398315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:08:52.997320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ec50b019b4219ab268ee82e4dc5fdda783969369bc91e1cbc5819625d10f931e] <==
	W0904 06:19:42.116963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:44.120339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:44.125116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:46.128308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:46.132142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:48.135512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:48.140868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:50.144214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:50.148476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:52.151978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:52.156617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:54.159853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:54.165001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:56.168402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:56.173422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:58.176545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:19:58.180232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:20:00.183188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:20:00.186879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:20:02.189686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:20:02.193274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:20:04.196467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:20:04.203964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:20:06.207329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:20:06.211867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-856205 -n functional-856205
helpers_test.go:269: (dbg) Run:  kubectl --context functional-856205 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7pjg2 hello-node-connect-7d85dfc575-ls6vq kubernetes-dashboard-855c9754f9-lgmr4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-856205 describe pod busybox-mount hello-node-75c85bcc94-7pjg2 hello-node-connect-7d85dfc575-ls6vq kubernetes-dashboard-855c9754f9-lgmr4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-856205 describe pod busybox-mount hello-node-75c85bcc94-7pjg2 hello-node-connect-7d85dfc575-ls6vq kubernetes-dashboard-855c9754f9-lgmr4: exit status 1 (75.213838ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-856205/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 06:09:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e406a5a33e982ad43aa7197e3b6ac43312297a9a5019dca167e6bc280a114eb0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 04 Sep 2025 06:09:52 +0000
	      Finished:     Thu, 04 Sep 2025 06:09:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8vmj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-s8vmj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-856205
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.092s (2.092s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7pjg2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-856205/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 06:09:31 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-njpm7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-njpm7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7pjg2 to functional-856205
	  Normal   Pulling    7m20s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m20s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m20s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    30s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     30s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-ls6vq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-856205/192.168.49.2
	Start Time:       Thu, 04 Sep 2025 06:10:05 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nn6w6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nn6w6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ls6vq to functional-856205
	  Normal   Pulling    6m13s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m13s (x5 over 9m35s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m13s (x5 over 9m35s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m33s (x19 over 9m35s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    0s (x39 over 9m35s)     kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lgmr4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-856205 describe pod busybox-mount hello-node-75c85bcc94-7pjg2 hello-node-connect-7d85dfc575-ls6vq kubernetes-dashboard-855c9754f9-lgmr4: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.85s)
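Note: the recurring pull failure in this section has a single root cause: the pods reference the bare short name "kicbase/echo-server", and the node's CRI-O has no unqualified-search registries and no short-name alias for it in /etc/containers/registries.conf, so the name cannot be resolved to any registry (the dashboard image fails separately, against docker.io's unauthenticated pull rate limit). A minimal sketch of a node-side fix, assuming the stock containers-registries.conf(5) layout (values are illustrative, and CRI-O would need to reload its registry configuration afterwards):

	# /etc/containers/registries.conf
	unqualified-search-registries = ["docker.io"]

	[aliases]
	# resolve the bare short name to a fully-qualified reference
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"

The same effect can be had without touching the node by referencing the image with its fully-qualified name (e.g. docker.io/kicbase/echo-server) in the workload itself.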

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-856205 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-856205 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7pjg2" [6391c560-49d2-4412-a202-0640a3dbb40c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-856205 -n functional-856205
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-04 06:19:32.165601804 +0000 UTC m=+1162.954632432
functional_test.go:1460: (dbg) Run:  kubectl --context functional-856205 describe po hello-node-75c85bcc94-7pjg2 -n default
functional_test.go:1460: (dbg) kubectl --context functional-856205 describe po hello-node-75c85bcc94-7pjg2 -n default:
Name:             hello-node-75c85bcc94-7pjg2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-856205/192.168.49.2
Start Time:       Thu, 04 Sep 2025 06:09:31 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-njpm7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-njpm7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7pjg2 to functional-856205
Normal   Pulling    6m45s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m45s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m45s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-856205 logs hello-node-75c85bcc94-7pjg2 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-856205 logs hello-node-75c85bcc94-7pjg2 -n default: exit status 1 (68.159212ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-7pjg2" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-856205 logs hello-node-75c85bcc94-7pjg2 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.58s)
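Note: because this test creates the deployment with the bare name "kicbase/echo-server" (see the create deployment step above), every start attempt goes through the short-name resolution that is failing on this node. An alternative mitigation, sketched here under the assumption that the intended image is docker.io/kicbase/echo-server, is to preload the image into the profile so no registry pull is needed:

	# pull locally under a fully-qualified name, then copy it into the minikube node
	docker pull docker.io/kicbase/echo-server:latest
	out/minikube-linux-amd64 -p functional-856205 image load docker.io/kicbase/echo-server:latest
	out/minikube-linux-amd64 -p functional-856205 image ls    # confirm it now appears in the node's storage

The deployment would then have to reference the same fully-qualified name and, since a :latest tag defaults to imagePullPolicy: Always, use IfNotPresent so the cached copy is actually accepted.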

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 service --namespace=default --https --url hello-node: exit status 115 (510.280148ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30219
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-856205 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)
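Note: this SVC_UNREACHABLE exit, like the identical ones in the Format and URL subtests below, only reflects that the hello-node service has no ready backend: the NodePort URL printed on stdout is allocated, but the echo-server pod behind it never left ImagePullBackOff. A quick confirmation from the same context (a sketch; the label selectors follow the objects shown earlier in this report):

	kubectl --context functional-856205 get pods -l app=hello-node
	kubectl --context functional-856205 get endpointslices -l kubernetes.io/service-name=hello-node

A pod stuck in ImagePullBackOff and an EndpointSlice with no ready endpoints confirm that the service failures are downstream of the image pull problem rather than a service or networking defect.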

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 service hello-node --url --format={{.IP}}: exit status 115 (512.612726ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-856205 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 service hello-node --url: exit status 115 (511.999484ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30219
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-856205 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30219
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)
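All three ServiceCmd subtests above (HTTPS, Format, URL) exit with SVC_UNREACHABLE for the same reason: minikube allocated the NodePort (30219) and prints the URL on stdout, but treats the service as unavailable because no running pod backs hello-node. A sketch of how the missing endpoints could be verified (illustrative commands, not part of the run):

	kubectl --context functional-856205 get service hello-node -n default -o wide
	kubectl --context functional-856205 get endpoints hello-node -n default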

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ctkhj" [191398b6-c62e-4c25-9bed-1fea30f5fed5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-869290 -n old-k8s-version-869290
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-04 07:00:35.953430295 +0000 UTC m=+3626.742460914
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-869290 describe po kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-869290 describe po kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-ctkhj
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-869290/192.168.76.2
Start Time:       Thu, 04 Sep 2025 06:51:08 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d8fhm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-d8fhm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  9m27s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj to old-k8s-version-869290
Warning  Failed     8m11s                  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    6m32s (x4 over 9m27s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     6m2s (x4 over 8m57s)   kubelet            Error: ErrImagePull
Warning  Failed     5m37s (x6 over 8m56s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    5m25s (x7 over 8m56s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     4m4s (x4 over 8m57s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-869290 logs kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-869290 logs kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard: exit status 1 (80.094941ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-ctkhj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-869290 logs kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
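The dashboard pod is stuck in ImagePullBackOff because unauthenticated pulls of docker.io/kubernetesui/dashboard hit Docker Hub's rate limit (the toomanyrequests events above). A minimal sketch of one possible workaround, assuming the image can be pulled on the host (cached or with authenticated credentials) and that the locally loaded image satisfies the digest-pinned reference:

	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	minikube -p old-k8s-version-869290 image load docker.io/kubernetesui/dashboard:v2.7.0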
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-869290
helpers_test.go:243: (dbg) docker inspect old-k8s-version-869290:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713",
	        "Created": "2025-09-04T06:49:35.46602092Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1771260,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:50:43.793377686Z",
	            "FinishedAt": "2025-09-04T06:50:43.068021983Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713/hostname",
	        "HostsPath": "/var/lib/docker/containers/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713/hosts",
	        "LogPath": "/var/lib/docker/containers/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713-json.log",
	        "Name": "/old-k8s-version-869290",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-869290:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-869290",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713",
	                "LowerDir": "/var/lib/docker/overlay2/70054fc1cd8315be99686a375dd5ad1c3d78f07ef6a4c2df95fc8ae6e1b848dd-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70054fc1cd8315be99686a375dd5ad1c3d78f07ef6a4c2df95fc8ae6e1b848dd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70054fc1cd8315be99686a375dd5ad1c3d78f07ef6a4c2df95fc8ae6e1b848dd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70054fc1cd8315be99686a375dd5ad1c3d78f07ef6a4c2df95fc8ae6e1b848dd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-869290",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-869290/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-869290",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-869290",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-869290",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfd9887ae856137840ff4089e7352aa402b336956352d94f420ad864129004d3",
	            "SandboxKey": "/var/run/docker/netns/bfd9887ae856",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-869290": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:ec:85:32:4a:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66257fe74e8a729f876c63df282eb573f7ca67afcf17672f4f62529bc49d57cd",
	                    "EndpointID": "b701321bbecfc061764b1cea2e9550663e6d4b42a47d0062268de3841999df69",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-869290",
	                        "206772efca5e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-869290 -n old-k8s-version-869290
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-869290 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-869290 logs -n 25: (1.205274249s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-869290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p old-k8s-version-869290 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-869290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ start   │ -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable metrics-server -p no-preload-574576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p no-preload-574576 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable dashboard -p no-preload-574576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p cert-expiration-620042 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ delete  │ -p cert-expiration-620042                                                                                                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │                     │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p kubernetes-upgrade-892549                                                                                                                                                                                                                  │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p disable-driver-mounts-393542                                                                                                                                                                                                               │ disable-driver-mounts-393542 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-589812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p embed-certs-589812 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-520775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p default-k8s-diff-port-520775 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-589812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-520775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:53:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:53:49.418555 1796928 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:53:49.418725 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.418774 1796928 out.go:374] Setting ErrFile to fd 2...
	I0904 06:53:49.418785 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.419117 1796928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:53:49.419985 1796928 out.go:368] Setting JSON to false
	I0904 06:53:49.421632 1796928 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16579,"bootTime":1756952250,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:53:49.421749 1796928 start.go:140] virtualization: kvm guest
	I0904 06:53:49.423972 1796928 out.go:179] * [default-k8s-diff-port-520775] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:53:49.425842 1796928 notify.go:220] Checking for updates...
	I0904 06:53:49.425850 1796928 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:53:49.427436 1796928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:53:49.428783 1796928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:49.429989 1796928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:53:49.431134 1796928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:53:49.432406 1796928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:53:49.434250 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:49.435089 1796928 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:53:49.462481 1796928 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:53:49.462577 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.536244 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.525128821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.536390 1796928 docker.go:318] overlay module found
	I0904 06:53:49.539526 1796928 out.go:179] * Using the docker driver based on existing profile
	I0904 06:53:49.540719 1796928 start.go:304] selected driver: docker
	I0904 06:53:49.540734 1796928 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.540822 1796928 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:53:49.541681 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.594566 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.585030944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.595064 1796928 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:49.595111 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:49.595174 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:49.595223 1796928 start.go:348] cluster config:
	{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.597216 1796928 out.go:179] * Starting "default-k8s-diff-port-520775" primary control-plane node in "default-k8s-diff-port-520775" cluster
	I0904 06:53:49.598401 1796928 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 06:53:49.599526 1796928 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 06:53:49.604882 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:49.604957 1796928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 06:53:49.604977 1796928 cache.go:58] Caching tarball of preloaded images
	I0904 06:53:49.604992 1796928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 06:53:49.605104 1796928 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 06:53:49.605123 1796928 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 06:53:49.605341 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.637613 1796928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 06:53:49.637635 1796928 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 06:53:49.637647 1796928 cache.go:232] Successfully downloaded all kic artifacts
	I0904 06:53:49.637673 1796928 start.go:360] acquireMachinesLock for default-k8s-diff-port-520775: {Name:mkd2b36988a85f8d5c3a19497a99007da8aadae2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 06:53:49.637729 1796928 start.go:364] duration metric: took 33.006µs to acquireMachinesLock for "default-k8s-diff-port-520775"
	I0904 06:53:49.637749 1796928 start.go:96] Skipping create...Using existing machine configuration
	I0904 06:53:49.637756 1796928 fix.go:54] fixHost starting: 
	I0904 06:53:49.637963 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.656941 1796928 fix.go:112] recreateIfNeeded on default-k8s-diff-port-520775: state=Stopped err=<nil>
	W0904 06:53:49.656986 1796928 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 06:53:49.524554 1794879 node_ready.go:49] node "embed-certs-589812" is "Ready"
	I0904 06:53:49.524655 1794879 node_ready.go:38] duration metric: took 3.407781482s for node "embed-certs-589812" to be "Ready" ...
	I0904 06:53:49.524688 1794879 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:53:49.524773 1794879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:53:51.714274 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.110482825s)
	I0904 06:53:51.714323 1794879 addons.go:479] Verifying addon metrics-server=true in "embed-certs-589812"
	I0904 06:53:51.714427 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.971633666s)
	I0904 06:53:51.714457 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.901617894s)
	I0904 06:53:51.714590 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.702133151s)
	I0904 06:53:51.714600 1794879 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.189780106s)
	I0904 06:53:51.714619 1794879 api_server.go:72] duration metric: took 5.87883589s to wait for apiserver process to appear ...
	I0904 06:53:51.714626 1794879 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:53:51.714643 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:51.716342 1794879 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-589812 addons enable metrics-server
	
	I0904 06:53:51.722283 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:51.722308 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:51.730360 1794879 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0904 06:53:51.731942 1794879 addons.go:514] duration metric: took 5.89615636s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:53:52.215034 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.219745 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.219786 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:52.715125 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.719686 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.719714 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:53.215303 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:53.219535 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0904 06:53:53.220593 1794879 api_server.go:141] control plane version: v1.34.0
	I0904 06:53:53.220626 1794879 api_server.go:131] duration metric: took 1.505992813s to wait for apiserver health ...
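
A minimal Go sketch of the polling loop behind the healthz lines above: hit /healthz until it returns 200, printing the per-check breakdown on each 500. The address is copied from the log; the 500ms interval, the ?verbose query, and the skipped TLS verification are illustration choices, not minikube's actual api_server.go implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver's self-signed cert is not trusted by this host, so the
		// unauthenticated probe skips verification (illustrative only).
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.94.2:8443/healthz?verbose")
			if err != nil {
				fmt.Println("healthz error:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // the 200 "ok" seen at 06:53:53 above
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

The same verbose breakdown can also be pulled ad hoc from a running cluster with kubectl get --raw='/healthz?verbose'.
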
	I0904 06:53:53.220641 1794879 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:53:53.224544 1794879 system_pods.go:59] 9 kube-system pods found
	I0904 06:53:53.224588 1794879 system_pods.go:61] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.224605 1794879 system_pods.go:61] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.224618 1794879 system_pods.go:61] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.224628 1794879 system_pods.go:61] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.224640 1794879 system_pods.go:61] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.224650 1794879 system_pods.go:61] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.224659 1794879 system_pods.go:61] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.224682 1794879 system_pods.go:61] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.224694 1794879 system_pods.go:61] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.224704 1794879 system_pods.go:74] duration metric: took 4.053609ms to wait for pod list to return data ...
	I0904 06:53:53.224716 1794879 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:53:53.227290 1794879 default_sa.go:45] found service account: "default"
	I0904 06:53:53.227311 1794879 default_sa.go:55] duration metric: took 2.585826ms for default service account to be created ...
	I0904 06:53:53.227319 1794879 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:53:53.230112 1794879 system_pods.go:86] 9 kube-system pods found
	I0904 06:53:53.230142 1794879 system_pods.go:89] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.230154 1794879 system_pods.go:89] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.230162 1794879 system_pods.go:89] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.230172 1794879 system_pods.go:89] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.230180 1794879 system_pods.go:89] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.230191 1794879 system_pods.go:89] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.230201 1794879 system_pods.go:89] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.230212 1794879 system_pods.go:89] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.230218 1794879 system_pods.go:89] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.230227 1794879 system_pods.go:126] duration metric: took 2.90283ms to wait for k8s-apps to be running ...
	I0904 06:53:53.230240 1794879 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:53:53.230287 1794879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:53:53.241829 1794879 system_svc.go:56] duration metric: took 11.584133ms WaitForService to wait for kubelet
	I0904 06:53:53.241853 1794879 kubeadm.go:578] duration metric: took 7.406070053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:53.241869 1794879 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:53:53.244406 1794879 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:53:53.244445 1794879 node_conditions.go:123] node cpu capacity is 8
	I0904 06:53:53.244459 1794879 node_conditions.go:105] duration metric: took 2.584951ms to run NodePressure ...
	I0904 06:53:53.244478 1794879 start.go:241] waiting for startup goroutines ...
	I0904 06:53:53.244492 1794879 start.go:246] waiting for cluster config update ...
	I0904 06:53:53.244509 1794879 start.go:255] writing updated cluster config ...
	I0904 06:53:53.244784 1794879 ssh_runner.go:195] Run: rm -f paused
	I0904 06:53:53.248131 1794879 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:53:53.251511 1794879 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
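
The system_pods and pod_ready waits above amount to listing kube-system pods and inspecting their Ready conditions. A rough client-go sketch of that check; the kubeconfig path is an assumption for illustration, and this is a sketch of the idea rather than the test harness's own system_pods.go / pod_ready.go code.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			// Mirrors the "Running / Ready:ContainersNotReady" style lines above.
			fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}
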
	I0904 06:53:49.659280 1796928 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-520775" ...
	I0904 06:53:49.659366 1796928 cli_runner.go:164] Run: docker start default-k8s-diff-port-520775
	I0904 06:53:49.944765 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.965484 1796928 kic.go:430] container "default-k8s-diff-port-520775" state is running.
	I0904 06:53:49.965966 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:49.984536 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.984754 1796928 machine.go:93] provisionDockerMachine start ...
	I0904 06:53:49.984828 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:50.006739 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:50.007122 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:50.007149 1796928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 06:53:50.011282 1796928 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0904 06:53:53.135459 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.135490 1796928 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-520775"
	I0904 06:53:53.135560 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.153046 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.153307 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.153323 1796928 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-520775 && echo "default-k8s-diff-port-520775" | sudo tee /etc/hostname
	I0904 06:53:53.284177 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.284278 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.302854 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.303062 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.303082 1796928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-520775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-520775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-520775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 06:53:53.428269 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 06:53:53.428306 1796928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1516970/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1516970/.minikube}
	I0904 06:53:53.428357 1796928 ubuntu.go:190] setting up certificates
	I0904 06:53:53.428381 1796928 provision.go:84] configureAuth start
	I0904 06:53:53.428449 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:53.447935 1796928 provision.go:143] copyHostCerts
	I0904 06:53:53.448036 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem, removing ...
	I0904 06:53:53.448051 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem
	I0904 06:53:53.448113 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem (1082 bytes)
	I0904 06:53:53.448215 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem, removing ...
	I0904 06:53:53.448223 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem
	I0904 06:53:53.448247 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem (1123 bytes)
	I0904 06:53:53.448320 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem, removing ...
	I0904 06:53:53.448326 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem
	I0904 06:53:53.448347 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem (1675 bytes)
	I0904 06:53:53.448409 1796928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-520775 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-520775 localhost minikube]
	I0904 06:53:53.540900 1796928 provision.go:177] copyRemoteCerts
	I0904 06:53:53.540966 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 06:53:53.541003 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.558727 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:53.650335 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 06:53:53.677813 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0904 06:53:53.700987 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 06:53:53.724318 1796928 provision.go:87] duration metric: took 295.918548ms to configureAuth
	I0904 06:53:53.724345 1796928 ubuntu.go:206] setting minikube options for container-runtime
	I0904 06:53:53.724529 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:53.724626 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.743241 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.743467 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.743488 1796928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 06:53:54.045106 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 06:53:54.045134 1796928 machine.go:96] duration metric: took 4.060362432s to provisionDockerMachine
	I0904 06:53:54.045148 1796928 start.go:293] postStartSetup for "default-k8s-diff-port-520775" (driver="docker")
	I0904 06:53:54.045187 1796928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 06:53:54.045256 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 06:53:54.045307 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.064198 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.152873 1796928 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 06:53:54.156293 1796928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 06:53:54.156319 1796928 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 06:53:54.156326 1796928 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 06:53:54.156333 1796928 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 06:53:54.156345 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/addons for local assets ...
	I0904 06:53:54.156399 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/files for local assets ...
	I0904 06:53:54.156481 1796928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem -> 15207162.pem in /etc/ssl/certs
	I0904 06:53:54.156610 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 06:53:54.165073 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:54.187780 1796928 start.go:296] duration metric: took 142.614938ms for postStartSetup
	I0904 06:53:54.187887 1796928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:53:54.187937 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.205683 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.292859 1796928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 06:53:54.297265 1796928 fix.go:56] duration metric: took 4.65950064s for fixHost
	I0904 06:53:54.297289 1796928 start.go:83] releasing machines lock for "default-k8s-diff-port-520775", held for 4.659549727s
	I0904 06:53:54.297358 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:54.315327 1796928 ssh_runner.go:195] Run: cat /version.json
	I0904 06:53:54.315393 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.315420 1796928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 06:53:54.315484 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.335338 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.336109 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.493584 1796928 ssh_runner.go:195] Run: systemctl --version
	I0904 06:53:54.498345 1796928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 06:53:54.638467 1796928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 06:53:54.642924 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.652284 1796928 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 06:53:54.652347 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.660849 1796928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 06:53:54.660875 1796928 start.go:495] detecting cgroup driver to use...
	I0904 06:53:54.660913 1796928 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 06:53:54.660966 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 06:53:54.672418 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 06:53:54.683134 1796928 docker.go:218] disabling cri-docker service (if available) ...
	I0904 06:53:54.683181 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 06:53:54.695400 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 06:53:54.706646 1796928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 06:53:54.793740 1796928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 06:53:54.873854 1796928 docker.go:234] disabling docker service ...
	I0904 06:53:54.873933 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 06:53:54.885885 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 06:53:54.896737 1796928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 06:53:54.980788 1796928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 06:53:55.057730 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 06:53:55.068310 1796928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 06:53:55.083683 1796928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 06:53:55.083736 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.093158 1796928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 06:53:55.093215 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.102672 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.113082 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.122399 1796928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 06:53:55.131334 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.140602 1796928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.150009 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.159908 1796928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 06:53:55.167649 1796928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 06:53:55.175680 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.254239 1796928 ssh_runner.go:195] Run: sudo systemctl restart crio
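
The cri-o reconfiguration above (pause image, cgroup driver, sysctls) is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf followed by a restart. A rough Go equivalent of just the pause_image rewrite, assuming the same file path and image as the log; illustrative only, not how minikube actually performs the edit, and it would need root just like the sudo sed commands.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const confPath = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(confPath)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Replace any existing pause_image line, like the sed command in the log.
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAllString(string(data), `pause_image = "registry.k8s.io/pause:3.10.1"`)
		if err := os.WriteFile(confPath, []byte(out), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
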
	I0904 06:53:55.362926 1796928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 06:53:55.363001 1796928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 06:53:55.366648 1796928 start.go:563] Will wait 60s for crictl version
	I0904 06:53:55.366695 1796928 ssh_runner.go:195] Run: which crictl
	I0904 06:53:55.369962 1796928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 06:53:55.403453 1796928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 06:53:55.403538 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.441474 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.479608 1796928 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 06:53:55.480915 1796928 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-520775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 06:53:55.497935 1796928 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0904 06:53:55.502150 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:53:55.514295 1796928 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 06:53:55.514485 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:55.514556 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.564218 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.564245 1796928 crio.go:433] Images already preloaded, skipping extraction
	I0904 06:53:55.564292 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.602409 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.602436 1796928 cache_images.go:85] Images are preloaded, skipping loading
	I0904 06:53:55.602446 1796928 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 crio true true} ...
	I0904 06:53:55.602577 1796928 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-520775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 06:53:55.602645 1796928 ssh_runner.go:195] Run: crio config
	I0904 06:53:55.664543 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:55.664570 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:55.664584 1796928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 06:53:55.664612 1796928 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-520775 NodeName:default-k8s-diff-port-520775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 06:53:55.664768 1796928 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-520775"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 06:53:55.664845 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 06:53:55.673590 1796928 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 06:53:55.673661 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 06:53:55.682016 1796928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0904 06:53:55.699448 1796928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 06:53:55.717472 1796928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0904 06:53:55.734579 1796928 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0904 06:53:55.737941 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:53:55.748899 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.834506 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:55.848002 1796928 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775 for IP: 192.168.103.2
	I0904 06:53:55.848028 1796928 certs.go:194] generating shared ca certs ...
	I0904 06:53:55.848048 1796928 certs.go:226] acquiring lock for ca certs: {Name:mk2d06825c36f44304767b415a9a93c84edb2667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:55.848186 1796928 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key
	I0904 06:53:55.848228 1796928 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key
	I0904 06:53:55.848237 1796928 certs.go:256] generating profile certs ...
	I0904 06:53:55.848310 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/client.key
	I0904 06:53:55.848365 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key.6ec15110
	I0904 06:53:55.848406 1796928 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key
	I0904 06:53:55.848517 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem (1338 bytes)
	W0904 06:53:55.848547 1796928 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716_empty.pem, impossibly tiny 0 bytes
	I0904 06:53:55.848556 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 06:53:55.848578 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem (1082 bytes)
	I0904 06:53:55.848601 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem (1123 bytes)
	I0904 06:53:55.848627 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem (1675 bytes)
	I0904 06:53:55.848669 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:55.849251 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 06:53:55.876639 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 06:53:55.904012 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 06:53:55.936371 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 06:53:56.018233 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0904 06:53:56.041340 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 06:53:56.065911 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 06:53:56.089737 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 06:53:56.112935 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /usr/share/ca-certificates/15207162.pem (1708 bytes)
	I0904 06:53:56.138060 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 06:53:56.162385 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem --> /usr/share/ca-certificates/1520716.pem (1338 bytes)
	I0904 06:53:56.185546 1796928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 06:53:56.202891 1796928 ssh_runner.go:195] Run: openssl version
	I0904 06:53:56.208611 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15207162.pem && ln -fs /usr/share/ca-certificates/15207162.pem /etc/ssl/certs/15207162.pem"
	I0904 06:53:56.219865 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223785 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:07 /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223867 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.231657 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15207162.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 06:53:56.243527 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 06:53:56.253334 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257449 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 06:00 /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257517 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.264253 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 06:53:56.273629 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1520716.pem && ln -fs /usr/share/ca-certificates/1520716.pem /etc/ssl/certs/1520716.pem"
	I0904 06:53:56.283120 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286378 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:07 /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286450 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.293207 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1520716.pem /etc/ssl/certs/51391683.0"
	I0904 06:53:56.301668 1796928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 06:53:56.308006 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 06:53:56.315155 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 06:53:56.322059 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 06:53:56.329568 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 06:53:56.337737 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 06:53:56.345511 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
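
Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 24 hours. A small Go equivalent using crypto/x509; the certificate path is taken from the log, while the helper name checkend is made up for illustration.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the certificate at path expires within the given
	// window, mirroring `openssl x509 -checkend 86400` from the log above.
	func checkend(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}
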
	I0904 06:53:56.353351 1796928 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:56.353482 1796928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 06:53:56.353539 1796928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 06:53:56.397941 1796928 cri.go:89] found id: ""
	I0904 06:53:56.398012 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 06:53:56.408886 1796928 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 06:53:56.408981 1796928 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0904 06:53:56.409041 1796928 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 06:53:56.424530 1796928 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 06:53:56.425727 1796928 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-520775" does not appear in /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.426580 1796928 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-1516970/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-520775" cluster setting kubeconfig missing "default-k8s-diff-port-520775" context setting]
	I0904 06:53:56.427949 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.430031 1796928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 06:53:56.444430 1796928 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0904 06:53:56.444470 1796928 kubeadm.go:593] duration metric: took 35.478353ms to restartPrimaryControlPlane
	I0904 06:53:56.444481 1796928 kubeadm.go:394] duration metric: took 91.143305ms to StartCluster
	I0904 06:53:56.444503 1796928 settings.go:142] acquiring lock: {Name:mk2d1c8a569b62879275d6daa2b799b595d6bca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.444560 1796928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.447245 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.447495 1796928 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 06:53:56.447711 1796928 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 06:53:56.447836 1796928 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447860 1796928 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447868 1796928 addons.go:247] addon storage-provisioner should already be in state true
	I0904 06:53:56.447888 1796928 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447903 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.447928 1796928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-520775"
	I0904 06:53:56.447921 1796928 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447939 1796928 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447979 1796928 addons.go:247] addon dashboard should already be in state true
	I0904 06:53:56.447980 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	W0904 06:53:56.447982 1796928 addons.go:247] addon metrics-server should already be in state true
	I0904 06:53:56.448017 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448020 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448276 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448431 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448473 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448520 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.450093 1796928 out.go:179] * Verifying Kubernetes components...
	I0904 06:53:56.451389 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:56.482390 1796928 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.482412 1796928 addons.go:247] addon default-storageclass should already be in state true
	I0904 06:53:56.482437 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.482730 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.485071 1796928 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 06:53:56.485089 1796928 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0904 06:53:56.488270 1796928 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.488294 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 06:53:56.488355 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.490382 1796928 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0904 06:53:56.491521 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0904 06:53:56.491536 1796928 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0904 06:53:56.491584 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.496773 1796928 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W0904 06:53:55.257485 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:53:57.757496 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:53:56.497920 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 06:53:56.497941 1796928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 06:53:56.498005 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.511983 1796928 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.512010 1796928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 06:53:56.512072 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.529596 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.531423 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.543761 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.547939 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.815518 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:56.824564 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.900475 1796928 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:53:56.903122 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.915401 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0904 06:53:56.915439 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0904 06:53:57.011674 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 06:53:57.011705 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0904 06:53:57.025890 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0904 06:53:57.025929 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0904 06:53:57.130640 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0904 06:53:57.130669 1796928 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0904 06:53:57.201935 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 06:53:57.201971 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	W0904 06:53:57.228446 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228496 1796928 retry.go:31] will retry after 331.542893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 06:53:57.228576 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228595 1796928 retry.go:31] will retry after 234.661911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.233201 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0904 06:53:57.233235 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0904 06:53:57.312449 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.312483 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 06:53:57.335196 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0904 06:53:57.335296 1796928 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0904 06:53:57.340794 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.423747 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0904 06:53:57.423855 1796928 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0904 06:53:57.464378 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:57.517739 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0904 06:53:57.517836 1796928 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0904 06:53:57.560380 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:57.621494 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0904 06:53:57.621580 1796928 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0904 06:53:57.719817 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:53:57.719851 1796928 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0904 06:53:57.808921 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:54:00.222294 1796928 node_ready.go:49] node "default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:00.222393 1796928 node_ready.go:38] duration metric: took 3.321861305s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:54:00.222414 1796928 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:54:00.222514 1796928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:54:02.420531 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.07964965s)
	I0904 06:54:02.420574 1796928 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-520775"
	I0904 06:54:02.420586 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.956118872s)
	I0904 06:54:02.420682 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.860244874s)
	I0904 06:54:02.420925 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.611964012s)
	I0904 06:54:02.420956 1796928 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.198413181s)
	I0904 06:54:02.421147 1796928 api_server.go:72] duration metric: took 5.973615373s to wait for apiserver process to appear ...
	I0904 06:54:02.421161 1796928 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:54:02.421181 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.422911 1796928 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-520775 addons enable metrics-server
	
	I0904 06:54:02.426397 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.426463 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:02.428576 1796928 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	W0904 06:53:59.759069 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:02.258100 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:02.429861 1796928 addons.go:514] duration metric: took 5.982154586s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:54:02.921448 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.926218 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.926239 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:03.421924 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:03.427035 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0904 06:54:03.428103 1796928 api_server.go:141] control plane version: v1.34.0
	I0904 06:54:03.428127 1796928 api_server.go:131] duration metric: took 1.006959868s to wait for apiserver health ...
	I0904 06:54:03.428136 1796928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:54:03.434471 1796928 system_pods.go:59] 9 kube-system pods found
	I0904 06:54:03.434508 1796928 system_pods.go:61] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.434519 1796928 system_pods.go:61] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.434525 1796928 system_pods.go:61] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.434533 1796928 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.434544 1796928 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.434564 1796928 system_pods.go:61] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.434573 1796928 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.434586 1796928 system_pods.go:61] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.434594 1796928 system_pods.go:61] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.434602 1796928 system_pods.go:74] duration metric: took 6.460113ms to wait for pod list to return data ...
	I0904 06:54:03.434614 1796928 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:54:03.437095 1796928 default_sa.go:45] found service account: "default"
	I0904 06:54:03.437116 1796928 default_sa.go:55] duration metric: took 2.49678ms for default service account to be created ...
	I0904 06:54:03.437124 1796928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:54:03.439954 1796928 system_pods.go:86] 9 kube-system pods found
	I0904 06:54:03.439997 1796928 system_pods.go:89] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.440010 1796928 system_pods.go:89] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.440018 1796928 system_pods.go:89] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.440029 1796928 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.440043 1796928 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.440053 1796928 system_pods.go:89] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.440060 1796928 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.440072 1796928 system_pods.go:89] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.440078 1796928 system_pods.go:89] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.440085 1796928 system_pods.go:126] duration metric: took 2.955ms to wait for k8s-apps to be running ...
	I0904 06:54:03.440100 1796928 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:54:03.440162 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:54:03.451705 1796928 system_svc.go:56] duration metric: took 11.594555ms WaitForService to wait for kubelet
	I0904 06:54:03.451731 1796928 kubeadm.go:578] duration metric: took 7.004201759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:54:03.451748 1796928 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:54:03.455005 1796928 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:54:03.455036 1796928 node_conditions.go:123] node cpu capacity is 8
	I0904 06:54:03.455062 1796928 node_conditions.go:105] duration metric: took 3.308068ms to run NodePressure ...
	I0904 06:54:03.455079 1796928 start.go:241] waiting for startup goroutines ...
	I0904 06:54:03.455095 1796928 start.go:246] waiting for cluster config update ...
	I0904 06:54:03.455112 1796928 start.go:255] writing updated cluster config ...
	I0904 06:54:03.455408 1796928 ssh_runner.go:195] Run: rm -f paused
	I0904 06:54:03.458944 1796928 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:03.462665 1796928 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 06:54:04.757792 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:07.257591 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:05.468478 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:07.500893 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:09.756895 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:12.257352 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:09.968652 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:12.468012 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:14.756854 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:17.256905 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:14.468746 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:16.967726 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:18.968373 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:19.257325 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:21.757694 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:20.968633 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:23.467871 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:24.256489 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:24.756710 1794879 pod_ready.go:94] pod "coredns-66bc5c9577-j5gww" is "Ready"
	I0904 06:54:24.756744 1794879 pod_ready.go:86] duration metric: took 31.505206553s for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.759357 1794879 pod_ready.go:83] waiting for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.763174 1794879 pod_ready.go:94] pod "etcd-embed-certs-589812" is "Ready"
	I0904 06:54:24.763194 1794879 pod_ready.go:86] duration metric: took 3.815458ms for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.765056 1794879 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.768709 1794879 pod_ready.go:94] pod "kube-apiserver-embed-certs-589812" is "Ready"
	I0904 06:54:24.768729 1794879 pod_ready.go:86] duration metric: took 3.655905ms for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.770312 1794879 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.955369 1794879 pod_ready.go:94] pod "kube-controller-manager-embed-certs-589812" is "Ready"
	I0904 06:54:24.955399 1794879 pod_ready.go:86] duration metric: took 185.06856ms for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.155371 1794879 pod_ready.go:83] waiting for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.555016 1794879 pod_ready.go:94] pod "kube-proxy-xqvlx" is "Ready"
	I0904 06:54:25.555045 1794879 pod_ready.go:86] duration metric: took 399.644529ms for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.754864 1794879 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155740 1794879 pod_ready.go:94] pod "kube-scheduler-embed-certs-589812" is "Ready"
	I0904 06:54:26.155768 1794879 pod_ready.go:86] duration metric: took 400.874171ms for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155779 1794879 pod_ready.go:40] duration metric: took 32.907618487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:26.201526 1794879 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:26.203310 1794879 out.go:179] * Done! kubectl is now configured to use "embed-certs-589812" cluster and "default" namespace by default
	W0904 06:54:25.468180 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:27.468649 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:29.468703 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:31.967748 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:34.467966 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	I0904 06:54:36.468207 1796928 pod_ready.go:94] pod "coredns-66bc5c9577-hm47q" is "Ready"
	I0904 06:54:36.468238 1796928 pod_ready.go:86] duration metric: took 33.005546695s for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.470247 1796928 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.474087 1796928 pod_ready.go:94] pod "etcd-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.474113 1796928 pod_ready.go:86] duration metric: took 3.802864ms for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.476057 1796928 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.479419 1796928 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.479437 1796928 pod_ready.go:86] duration metric: took 3.359104ms for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.481399 1796928 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.666267 1796928 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.666294 1796928 pod_ready.go:86] duration metric: took 184.873705ms for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.866510 1796928 pod_ready.go:83] waiting for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.266395 1796928 pod_ready.go:94] pod "kube-proxy-zrlrh" is "Ready"
	I0904 06:54:37.266428 1796928 pod_ready.go:86] duration metric: took 399.888589ms for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.466543 1796928 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866935 1796928 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:37.866974 1796928 pod_ready.go:86] duration metric: took 400.403816ms for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866986 1796928 pod_ready.go:40] duration metric: took 34.408008083s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:37.912300 1796928 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:37.913920 1796928 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-520775" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 06:59:05 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:05.120068685Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=70545ab3-16f1-479a-857e-d5130aaec87d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:08 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:08.119685575Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1cdf0d62-92c6-4872-aed0-7faf601b77f4 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:08 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:08.120108253Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1cdf0d62-92c6-4872-aed0-7faf601b77f4 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:16 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:16.120061937Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2b1d642c-9d2f-4ad6-bc99-70354f3aebaa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:16 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:16.120326426Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2b1d642c-9d2f-4ad6-bc99-70354f3aebaa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:21 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:21.119719875Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=5508a473-5a4c-4fbf-a3eb-d847146b8689 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:21 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:21.120023725Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=5508a473-5a4c-4fbf-a3eb-d847146b8689 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:21 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:21.120641976Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b26e35bb-5cfd-4e28-bf65-42b7dbb7bad1 name=/runtime.v1.ImageService/PullImage
	Sep 04 06:59:21 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:21.121913178Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 04 06:59:30 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:30.119376990Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8b86bde8-57ce-4674-93af-160c299aa72b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:30 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:30.119747427Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8b86bde8-57ce-4674-93af-160c299aa72b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:45 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:45.119536727Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=efdc4fe1-0cba-4ede-9542-8d54a38f91fa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:45 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:45.119766458Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=efdc4fe1-0cba-4ede-9542-8d54a38f91fa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:59 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:59.120276651Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=fd136206-18c8-4b7f-9f53-7298dea73302 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:59 old-k8s-version-869290 crio[682]: time="2025-09-04 06:59:59.120504904Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=fd136206-18c8-4b7f-9f53-7298dea73302 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:02 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:02.119687955Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=77b9d7d7-2192-4a82-8af5-49ba0cffff59 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:02 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:02.120088597Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=77b9d7d7-2192-4a82-8af5-49ba0cffff59 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:13 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:13.120196793Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c8154c66-ba9e-4acb-b69f-b0d73da208ad name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:13 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:13.120420490Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c8154c66-ba9e-4acb-b69f-b0d73da208ad name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:16 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:16.120161946Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ab6f1bfb-0731-4ba3-9d44-81f49aa8cd8a name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:16 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:16.120516168Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ab6f1bfb-0731-4ba3-9d44-81f49aa8cd8a name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:28 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:28.120030431Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f526c1bf-f68c-489b-80d4-1e65701209fe name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:28 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:28.120312930Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f526c1bf-f68c-489b-80d4-1e65701209fe name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:29 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:29.119660132Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=dd9354df-05ed-4325-bdaa-adea1b53cc62 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:29 old-k8s-version-869290 crio[682]: time="2025-09-04 07:00:29.119973113Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=dd9354df-05ed-4325-bdaa-adea1b53cc62 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	5b5fc5ba35f79       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   6                   e09a6ababe5c0       dashboard-metrics-scraper-5f989dc9cf-b8rrc
	190aec8c45b0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   ef28d474b8abd       storage-provisioner
	b91e293d6f376       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   f607dd984555f       busybox
	ad32663a51f8a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                     1                   17968ce457a9c       coredns-5dd5756b68-plrdh
	619bf3076c8f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   ef28d474b8abd       storage-provisioner
	5fd80a4de7446       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a   9 minutes ago       Running             kube-proxy                  1                   5411a53c1e3ce       kube-proxy-mk95k
	be0827961faeb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   9d4ac574b6b95       kindnet-qt2lt
	216c4f395c622       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                        1                   f4dc1f328e53f       etcd-old-k8s-version-869290
	77f69d5438aa2       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95   9 minutes ago       Running             kube-apiserver              1                   933ae54e980db       kube-apiserver-old-k8s-version-869290
	c52fac654a506       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157   9 minutes ago       Running             kube-scheduler              1                   8cc5327484762       kube-scheduler-old-k8s-version-869290
	3cc2cf8e6bb3d       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62   9 minutes ago       Running             kube-controller-manager     1                   a8fe31e4f0451       kube-controller-manager-old-k8s-version-869290
	
	
	==> coredns [ad32663a51f8a226fee8527c4055d4e037a41fda7996a7fcd753ad350a4e0410] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48154 - 60154 "HINFO IN 6168828961770051816.2673140864784376398. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025063103s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-869290
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-869290
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=old-k8s-version-869290
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_49_52_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:49:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-869290
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 07:00:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 06:57:00 +0000   Thu, 04 Sep 2025 06:49:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 06:57:00 +0000   Thu, 04 Sep 2025 06:49:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 06:57:00 +0000   Thu, 04 Sep 2025 06:49:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 06:57:00 +0000   Thu, 04 Sep 2025 06:50:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-869290
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1b2795dc83a4f7ba901f1f8ac9725e1
	  System UUID:                9a3a2904-3fd2-42f5-8dd5-d48ec28a2076
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-plrdh                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-old-k8s-version-869290                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-qt2lt                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-old-k8s-version-869290             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-869290    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-mk95k                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-869290             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-57f55c9bc5-9q8f6                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-b8rrc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-ctkhj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x9 over 10m)      kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node old-k8s-version-869290 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-869290 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-869290 event: Registered Node old-k8s-version-869290 in Controller
	  Normal  NodeReady                10m                    kubelet          Node old-k8s-version-869290 status is now: NodeReady
	  Normal  Starting                 9m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m47s (x8 over 9m47s)  kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m47s (x8 over 9m47s)  kubelet          Node old-k8s-version-869290 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m47s (x8 over 9m47s)  kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m29s                  node-controller  Node old-k8s-version-869290 event: Registered Node old-k8s-version-869290 in Controller
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +2.011770] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000003] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +1.535866] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000001] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.003918] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +2.555764] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000006] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000004] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +8.191102] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000008] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000002] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	
	
	==> etcd [216c4f395c622e64c119af83270d04476d7dff81ddcf948d0e2caa7e660d9156] <==
	{"level":"info","ts":"2025-09-04T06:50:51.518749Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-04T06:50:51.518456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-09-04T06:50:51.520011Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-09-04T06:50:51.520152Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-04T06:50:51.520193Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-04T06:50:51.522629Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-04T06:50:51.522855Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-04T06:50:51.522888Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-04T06:50:51.523009Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-04T06:50:51.523019Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-04T06:50:53.103377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-04T06:50:53.103435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-04T06:50:53.10347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-09-04T06:50:53.10349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-09-04T06:50:53.103496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-09-04T06:50:53.103507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-09-04T06:50:53.103514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-09-04T06:50:53.104427Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-869290 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-04T06:50:53.104487Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-04T06:50:53.104593Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-04T06:50:53.104616Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-04T06:50:53.104478Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-04T06:50:53.105823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-04T06:50:53.105824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-09-04T06:52:49.25707Z","caller":"traceutil/trace.go:171","msg":"trace[109768138] transaction","detail":"{read_only:false; response_revision:777; number_of_response:1; }","duration":"130.585589ms","start":"2025-09-04T06:52:49.126448Z","end":"2025-09-04T06:52:49.257033Z","steps":["trace[109768138] 'process raft request'  (duration: 85.9ms)","trace[109768138] 'compare'  (duration: 44.476392ms)"],"step_count":2}
	
	
	==> kernel <==
	 07:00:37 up  4:43,  0 users,  load average: 0.71, 1.30, 1.70
	Linux old-k8s-version-869290 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [be0827961faeb2668d30a2191b7355359dc8d4c3c703ad7443cff934d506cb72] <==
	I0904 06:58:36.704405       1 main.go:301] handling current node
	I0904 06:58:46.708472       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 06:58:46.708510       1 main.go:301] handling current node
	I0904 06:58:56.708713       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 06:58:56.708765       1 main.go:301] handling current node
	I0904 06:59:06.701144       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 06:59:06.701188       1 main.go:301] handling current node
	I0904 06:59:16.700985       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 06:59:16.701026       1 main.go:301] handling current node
	I0904 06:59:26.708559       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 06:59:26.708599       1 main.go:301] handling current node
	I0904 06:59:36.703387       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 06:59:36.703421       1 main.go:301] handling current node
	I0904 06:59:46.707871       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 06:59:46.707920       1 main.go:301] handling current node
	I0904 06:59:56.708992       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 06:59:56.709032       1 main.go:301] handling current node
	I0904 07:00:06.701004       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:00:06.701046       1 main.go:301] handling current node
	I0904 07:00:16.701443       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:00:16.701496       1 main.go:301] handling current node
	I0904 07:00:26.708076       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:00:26.708110       1 main.go:301] handling current node
	I0904 07:00:36.700988       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:00:36.701026       1 main.go:301] handling current node
	
	
	==> kube-apiserver [77f69d5438aa2072ffdf6b91b3958e71249533445cfb6477abdfb8612bf08489] <==
	E0904 06:55:55.409468       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0904 06:55:55.410422       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 06:56:54.147937       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.204.177:443: connect: connection refused
	I0904 06:56:54.147967       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0904 06:56:55.409986       1 handler_proxy.go:93] no RequestInfo found in the context
	E0904 06:56:55.410029       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0904 06:56:55.410039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 06:56:55.411155       1 handler_proxy.go:93] no RequestInfo found in the context
	E0904 06:56:55.411239       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0904 06:56:55.411252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 06:57:54.147980       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.204.177:443: connect: connection refused
	I0904 06:57:54.148007       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0904 06:58:54.148249       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.204.177:443: connect: connection refused
	I0904 06:58:54.148277       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0904 06:58:55.410944       1 handler_proxy.go:93] no RequestInfo found in the context
	E0904 06:58:55.411001       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0904 06:58:55.411011       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 06:58:55.412153       1 handler_proxy.go:93] no RequestInfo found in the context
	E0904 06:58:55.412235       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0904 06:58:55.412243       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 06:59:54.148557       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.204.177:443: connect: connection refused
	I0904 06:59:54.148584       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [3cc2cf8e6bb3d7c6e97880baf7fee195f6522eec30032ed014e472ba43b31616] <==
	I0904 06:56:08.943411       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 06:56:38.498619       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 06:56:38.951667       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0904 06:56:44.129991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="134.021µs"
	I0904 06:56:59.129550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="128.89µs"
	E0904 06:57:08.503626       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 06:57:08.959440       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 06:57:38.508121       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 06:57:38.966508       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0904 06:57:47.074275       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="190.628µs"
	I0904 06:57:49.181956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="178.368µs"
	I0904 06:57:50.130381       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="130.849µs"
	I0904 06:58:01.130711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="85.554µs"
	E0904 06:58:08.512651       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 06:58:08.974322       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 06:58:38.517795       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 06:58:38.981470       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 06:59:08.523618       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 06:59:08.988045       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 06:59:38.528421       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 06:59:38.994546       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0904 07:00:02.129699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="131.614µs"
	E0904 07:00:08.532975       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:00:09.001409       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0904 07:00:16.130170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="100.049µs"
	
	
	==> kube-proxy [5fd80a4de7446b801c5330df8fed98c34cc77d6bd01abc2aa9e5b5bb8d8015bd] <==
	I0904 06:50:56.422436       1 server_others.go:69] "Using iptables proxy"
	I0904 06:50:56.501444       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0904 06:50:56.534461       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:50:56.536511       1 server_others.go:152] "Using iptables Proxier"
	I0904 06:50:56.536540       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0904 06:50:56.536547       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0904 06:50:56.536602       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0904 06:50:56.536960       1 server.go:846] "Version info" version="v1.28.0"
	I0904 06:50:56.537005       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:50:56.538082       1 config.go:188] "Starting service config controller"
	I0904 06:50:56.538113       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0904 06:50:56.538172       1 config.go:315] "Starting node config controller"
	I0904 06:50:56.538182       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0904 06:50:56.539580       1 config.go:97] "Starting endpoint slice config controller"
	I0904 06:50:56.539617       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0904 06:50:56.638442       1 shared_informer.go:318] Caches are synced for node config
	I0904 06:50:56.638538       1 shared_informer.go:318] Caches are synced for service config
	I0904 06:50:56.640074       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c52fac654a5067fa08334b4a0d9d11c862aee02eb14d5aca97e094d63b613e72] <==
	I0904 06:50:52.162424       1 serving.go:348] Generated self-signed cert in-memory
	W0904 06:50:54.316382       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:50:54.316503       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:50:54.316545       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:50:54.316597       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:50:54.418583       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0904 06:50:54.418614       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:50:54.420280       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:50:54.420323       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0904 06:50:54.421325       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0904 06:50:54.421494       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0904 06:50:54.521505       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 06:59:16 old-k8s-version-869290 kubelet[830]: E0904 06:59:16.120570     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 06:59:23 old-k8s-version-869290 kubelet[830]: I0904 06:59:23.119017     830 scope.go:117] "RemoveContainer" containerID="5b5fc5ba35f7970f4141e145c7feb43bc751dc456c95e955279ecd288fa3ad9a"
	Sep 04 06:59:23 old-k8s-version-869290 kubelet[830]: E0904 06:59:23.119299     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 06:59:30 old-k8s-version-869290 kubelet[830]: E0904 06:59:30.120142     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 06:59:36 old-k8s-version-869290 kubelet[830]: I0904 06:59:36.119349     830 scope.go:117] "RemoveContainer" containerID="5b5fc5ba35f7970f4141e145c7feb43bc751dc456c95e955279ecd288fa3ad9a"
	Sep 04 06:59:36 old-k8s-version-869290 kubelet[830]: E0904 06:59:36.119689     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 06:59:45 old-k8s-version-869290 kubelet[830]: E0904 06:59:45.120111     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 06:59:51 old-k8s-version-869290 kubelet[830]: I0904 06:59:51.118860     830 scope.go:117] "RemoveContainer" containerID="5b5fc5ba35f7970f4141e145c7feb43bc751dc456c95e955279ecd288fa3ad9a"
	Sep 04 06:59:51 old-k8s-version-869290 kubelet[830]: E0904 06:59:51.119292     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 06:59:51 old-k8s-version-869290 kubelet[830]: E0904 06:59:51.194233     830 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 04 06:59:51 old-k8s-version-869290 kubelet[830]: E0904 06:59:51.194290     830 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 04 06:59:51 old-k8s-version-869290 kubelet[830]: E0904 06:59:51.194436     830 kuberuntime_manager.go:1209] container &Container{Name:kubernetes-dashboard,Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Command:[],Args:[--namespace=kubernetes-dashboard --enable-skip-login --disable-settings-authorizer],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d8fhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9090 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kubernetes-dashboard-8694d4445c-ctkhj_kubernetes-dashboard(191398b6-c62e-4c25-9bed-1fea30f5fed5): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Sep 04 06:59:51 old-k8s-version-869290 kubelet[830]: E0904 06:59:51.194496     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	Sep 04 06:59:59 old-k8s-version-869290 kubelet[830]: E0904 06:59:59.120861     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 07:00:02 old-k8s-version-869290 kubelet[830]: E0904 07:00:02.120351     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	Sep 04 07:00:03 old-k8s-version-869290 kubelet[830]: I0904 07:00:03.119436     830 scope.go:117] "RemoveContainer" containerID="5b5fc5ba35f7970f4141e145c7feb43bc751dc456c95e955279ecd288fa3ad9a"
	Sep 04 07:00:03 old-k8s-version-869290 kubelet[830]: E0904 07:00:03.119746     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 07:00:13 old-k8s-version-869290 kubelet[830]: E0904 07:00:13.120740     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 07:00:16 old-k8s-version-869290 kubelet[830]: I0904 07:00:16.119847     830 scope.go:117] "RemoveContainer" containerID="5b5fc5ba35f7970f4141e145c7feb43bc751dc456c95e955279ecd288fa3ad9a"
	Sep 04 07:00:16 old-k8s-version-869290 kubelet[830]: E0904 07:00:16.120215     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 07:00:16 old-k8s-version-869290 kubelet[830]: E0904 07:00:16.120775     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	Sep 04 07:00:27 old-k8s-version-869290 kubelet[830]: I0904 07:00:27.119824     830 scope.go:117] "RemoveContainer" containerID="5b5fc5ba35f7970f4141e145c7feb43bc751dc456c95e955279ecd288fa3ad9a"
	Sep 04 07:00:27 old-k8s-version-869290 kubelet[830]: E0904 07:00:27.120110     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 07:00:28 old-k8s-version-869290 kubelet[830]: E0904 07:00:28.120596     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 07:00:29 old-k8s-version-869290 kubelet[830]: E0904 07:00:29.120327     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	
	
	==> storage-provisioner [190aec8c45b0f19b4d7b202a54b0635d07eba13a1a6554ec4a037d1f8b416ed5] <==
	I0904 06:51:27.355253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 06:51:27.362417       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 06:51:27.362449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 06:51:44.755737       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 06:51:44.755841       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4bd924ae-d481-49b4-af7b-7da5f8f31cc5", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-869290_0e5432b1-255e-42fb-9770-8fe9480f71a8 became leader
	I0904 06:51:44.755925       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-869290_0e5432b1-255e-42fb-9770-8fe9480f71a8!
	I0904 06:51:44.856210       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-869290_0e5432b1-255e-42fb-9770-8fe9480f71a8!
	
	
	==> storage-provisioner [619bf3076c8f2810f712ad4979d9483bea3ce02acaf717d24aa9ec66120b9bcb] <==
	I0904 06:50:56.405711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 06:51:26.408292       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-869290 -n old-k8s-version-869290
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-869290 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-9q8f6 kubernetes-dashboard-8694d4445c-ctkhj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-869290 describe pod metrics-server-57f55c9bc5-9q8f6 kubernetes-dashboard-8694d4445c-ctkhj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-869290 describe pod metrics-server-57f55c9bc5-9q8f6 kubernetes-dashboard-8694d4445c-ctkhj: exit status 1 (61.876629ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9q8f6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-ctkhj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-869290 describe pod metrics-server-57f55c9bc5-9q8f6 kubernetes-dashboard-8694d4445c-ctkhj: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rf2hg" [0a81ba81-116f-4a44-ab32-2b3c88744009] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-574576 -n no-preload-574576
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-04 07:00:52.375100298 +0000 UTC m=+3643.164130929
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-574576 describe po kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-574576 describe po kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-rf2hg
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-574576/192.168.85.2
Start Time:       Thu, 04 Sep 2025 06:51:20 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tl5hg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-tl5hg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m32s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg to no-preload-574576
Warning  Failed     7m21s                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    4m39s (x5 over 9m32s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     4m9s (x4 over 8m59s)    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m9s (x5 over 8m59s)    kubelet            Error: ErrImagePull
Warning  Failed     2m47s (x16 over 8m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    105s (x21 over 8m58s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-574576 logs kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context no-preload-574576 logs kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard: exit status 1 (72.931865ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-rf2hg" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context no-preload-574576 logs kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-574576
helpers_test.go:243: (dbg) docker inspect no-preload-574576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2",
	        "Created": "2025-09-04T06:49:50.879365265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1775251,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:51:03.89103056Z",
	            "FinishedAt": "2025-09-04T06:51:03.125518292Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2/hosts",
	        "LogPath": "/var/lib/docker/containers/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2-json.log",
	        "Name": "/no-preload-574576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-574576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-574576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2",
	                "LowerDir": "/var/lib/docker/overlay2/7c6f0b0f0b456f106f7785e42901c4a1fddb7aed999e4717209f60fdb8d4249f-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c6f0b0f0b456f106f7785e42901c4a1fddb7aed999e4717209f60fdb8d4249f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c6f0b0f0b456f106f7785e42901c4a1fddb7aed999e4717209f60fdb8d4249f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c6f0b0f0b456f106f7785e42901c4a1fddb7aed999e4717209f60fdb8d4249f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-574576",
	                "Source": "/var/lib/docker/volumes/no-preload-574576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-574576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-574576",
	                "name.minikube.sigs.k8s.io": "no-preload-574576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8aaa5a9c79bdf30a2cfa11cab0def2c8da5a2b1a89c15fab8d940ae32a5268ae",
	            "SandboxKey": "/var/run/docker/netns/8aaa5a9c79bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-574576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:18:65:52:11:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "512820bef1773b08fe7e32736d062562ad1b1adf8c8167147e68a5a3f69d7a8c",
	                    "EndpointID": "a7499aa96833efe822b154cf596a5437ccf18250c43f14c80d2d82618082223f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-574576",
	                        "1e2279782717"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-574576 -n no-preload-574576
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-574576 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-574576 logs -n 25: (1.211540911s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-869290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p old-k8s-version-869290 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-869290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ start   │ -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable metrics-server -p no-preload-574576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p no-preload-574576 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable dashboard -p no-preload-574576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p cert-expiration-620042 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ delete  │ -p cert-expiration-620042                                                                                                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │                     │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p kubernetes-upgrade-892549                                                                                                                                                                                                                  │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p disable-driver-mounts-393542                                                                                                                                                                                                               │ disable-driver-mounts-393542 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-589812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p embed-certs-589812 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-520775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p default-k8s-diff-port-520775 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-589812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-520775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:53:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:53:49.418555 1796928 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:53:49.418725 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.418774 1796928 out.go:374] Setting ErrFile to fd 2...
	I0904 06:53:49.418785 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.419117 1796928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:53:49.419985 1796928 out.go:368] Setting JSON to false
	I0904 06:53:49.421632 1796928 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16579,"bootTime":1756952250,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:53:49.421749 1796928 start.go:140] virtualization: kvm guest
	I0904 06:53:49.423972 1796928 out.go:179] * [default-k8s-diff-port-520775] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:53:49.425842 1796928 notify.go:220] Checking for updates...
	I0904 06:53:49.425850 1796928 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:53:49.427436 1796928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:53:49.428783 1796928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:49.429989 1796928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:53:49.431134 1796928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:53:49.432406 1796928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:53:49.434250 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:49.435089 1796928 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:53:49.462481 1796928 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:53:49.462577 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.536244 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.525128821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.536390 1796928 docker.go:318] overlay module found
	I0904 06:53:49.539526 1796928 out.go:179] * Using the docker driver based on existing profile
	I0904 06:53:49.540719 1796928 start.go:304] selected driver: docker
	I0904 06:53:49.540734 1796928 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.540822 1796928 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:53:49.541681 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.594566 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.585030944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.595064 1796928 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:49.595111 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:49.595174 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:49.595223 1796928 start.go:348] cluster config:
	{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.597216 1796928 out.go:179] * Starting "default-k8s-diff-port-520775" primary control-plane node in "default-k8s-diff-port-520775" cluster
	I0904 06:53:49.598401 1796928 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 06:53:49.599526 1796928 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 06:53:49.604882 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:49.604957 1796928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 06:53:49.604977 1796928 cache.go:58] Caching tarball of preloaded images
	I0904 06:53:49.604992 1796928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 06:53:49.605104 1796928 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 06:53:49.605123 1796928 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 06:53:49.605341 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.637613 1796928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 06:53:49.637635 1796928 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 06:53:49.637647 1796928 cache.go:232] Successfully downloaded all kic artifacts
	I0904 06:53:49.637673 1796928 start.go:360] acquireMachinesLock for default-k8s-diff-port-520775: {Name:mkd2b36988a85f8d5c3a19497a99007da8aadae2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 06:53:49.637729 1796928 start.go:364] duration metric: took 33.006µs to acquireMachinesLock for "default-k8s-diff-port-520775"
	I0904 06:53:49.637749 1796928 start.go:96] Skipping create...Using existing machine configuration
	I0904 06:53:49.637756 1796928 fix.go:54] fixHost starting: 
	I0904 06:53:49.637963 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.656941 1796928 fix.go:112] recreateIfNeeded on default-k8s-diff-port-520775: state=Stopped err=<nil>
	W0904 06:53:49.656986 1796928 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 06:53:49.524554 1794879 node_ready.go:49] node "embed-certs-589812" is "Ready"
	I0904 06:53:49.524655 1794879 node_ready.go:38] duration metric: took 3.407781482s for node "embed-certs-589812" to be "Ready" ...
	I0904 06:53:49.524688 1794879 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:53:49.524773 1794879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:53:51.714274 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.110482825s)
	I0904 06:53:51.714323 1794879 addons.go:479] Verifying addon metrics-server=true in "embed-certs-589812"
	I0904 06:53:51.714427 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.971633666s)
	I0904 06:53:51.714457 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.901617894s)
	I0904 06:53:51.714590 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.702133151s)
	I0904 06:53:51.714600 1794879 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.189780106s)
	I0904 06:53:51.714619 1794879 api_server.go:72] duration metric: took 5.87883589s to wait for apiserver process to appear ...
	I0904 06:53:51.714626 1794879 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:53:51.714643 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:51.716342 1794879 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-589812 addons enable metrics-server
	
	I0904 06:53:51.722283 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:51.722308 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:51.730360 1794879 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0904 06:53:51.731942 1794879 addons.go:514] duration metric: took 5.89615636s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:53:52.215034 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.219745 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.219786 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:52.715125 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.719686 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.719714 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:53.215303 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:53.219535 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0904 06:53:53.220593 1794879 api_server.go:141] control plane version: v1.34.0
	I0904 06:53:53.220626 1794879 api_server.go:131] duration metric: took 1.505992813s to wait for apiserver health ...
	I0904 06:53:53.220641 1794879 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:53:53.224544 1794879 system_pods.go:59] 9 kube-system pods found
	I0904 06:53:53.224588 1794879 system_pods.go:61] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.224605 1794879 system_pods.go:61] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.224618 1794879 system_pods.go:61] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.224628 1794879 system_pods.go:61] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.224640 1794879 system_pods.go:61] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.224650 1794879 system_pods.go:61] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.224659 1794879 system_pods.go:61] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.224682 1794879 system_pods.go:61] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.224694 1794879 system_pods.go:61] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.224704 1794879 system_pods.go:74] duration metric: took 4.053609ms to wait for pod list to return data ...
	I0904 06:53:53.224716 1794879 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:53:53.227290 1794879 default_sa.go:45] found service account: "default"
	I0904 06:53:53.227311 1794879 default_sa.go:55] duration metric: took 2.585826ms for default service account to be created ...
	I0904 06:53:53.227319 1794879 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:53:53.230112 1794879 system_pods.go:86] 9 kube-system pods found
	I0904 06:53:53.230142 1794879 system_pods.go:89] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.230154 1794879 system_pods.go:89] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.230162 1794879 system_pods.go:89] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.230172 1794879 system_pods.go:89] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.230180 1794879 system_pods.go:89] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.230191 1794879 system_pods.go:89] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.230201 1794879 system_pods.go:89] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.230212 1794879 system_pods.go:89] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.230218 1794879 system_pods.go:89] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.230227 1794879 system_pods.go:126] duration metric: took 2.90283ms to wait for k8s-apps to be running ...
	I0904 06:53:53.230240 1794879 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:53:53.230287 1794879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:53:53.241829 1794879 system_svc.go:56] duration metric: took 11.584133ms WaitForService to wait for kubelet
	I0904 06:53:53.241853 1794879 kubeadm.go:578] duration metric: took 7.406070053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:53.241869 1794879 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:53:53.244406 1794879 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:53:53.244445 1794879 node_conditions.go:123] node cpu capacity is 8
	I0904 06:53:53.244459 1794879 node_conditions.go:105] duration metric: took 2.584951ms to run NodePressure ...
	I0904 06:53:53.244478 1794879 start.go:241] waiting for startup goroutines ...
	I0904 06:53:53.244492 1794879 start.go:246] waiting for cluster config update ...
	I0904 06:53:53.244509 1794879 start.go:255] writing updated cluster config ...
	I0904 06:53:53.244784 1794879 ssh_runner.go:195] Run: rm -f paused
	I0904 06:53:53.248131 1794879 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:53:53.251511 1794879 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:53:49.659280 1796928 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-520775" ...
	I0904 06:53:49.659366 1796928 cli_runner.go:164] Run: docker start default-k8s-diff-port-520775
	I0904 06:53:49.944765 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.965484 1796928 kic.go:430] container "default-k8s-diff-port-520775" state is running.
	I0904 06:53:49.965966 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:49.984536 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.984754 1796928 machine.go:93] provisionDockerMachine start ...
	I0904 06:53:49.984828 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:50.006739 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:50.007122 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:50.007149 1796928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 06:53:50.011282 1796928 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0904 06:53:53.135459 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.135490 1796928 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-520775"
	I0904 06:53:53.135560 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.153046 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.153307 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.153323 1796928 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-520775 && echo "default-k8s-diff-port-520775" | sudo tee /etc/hostname
	I0904 06:53:53.284177 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.284278 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.302854 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.303062 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.303082 1796928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-520775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-520775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-520775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 06:53:53.428269 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 06:53:53.428306 1796928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1516970/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1516970/.minikube}
	I0904 06:53:53.428357 1796928 ubuntu.go:190] setting up certificates
	I0904 06:53:53.428381 1796928 provision.go:84] configureAuth start
	I0904 06:53:53.428449 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:53.447935 1796928 provision.go:143] copyHostCerts
	I0904 06:53:53.448036 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem, removing ...
	I0904 06:53:53.448051 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem
	I0904 06:53:53.448113 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem (1082 bytes)
	I0904 06:53:53.448215 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem, removing ...
	I0904 06:53:53.448223 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem
	I0904 06:53:53.448247 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem (1123 bytes)
	I0904 06:53:53.448320 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem, removing ...
	I0904 06:53:53.448326 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem
	I0904 06:53:53.448347 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem (1675 bytes)
	I0904 06:53:53.448409 1796928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-520775 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-520775 localhost minikube]
	I0904 06:53:53.540900 1796928 provision.go:177] copyRemoteCerts
	I0904 06:53:53.540966 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 06:53:53.541003 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.558727 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:53.650335 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 06:53:53.677813 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0904 06:53:53.700987 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 06:53:53.724318 1796928 provision.go:87] duration metric: took 295.918548ms to configureAuth
	I0904 06:53:53.724345 1796928 ubuntu.go:206] setting minikube options for container-runtime
	I0904 06:53:53.724529 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:53.724626 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.743241 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.743467 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.743488 1796928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 06:53:54.045106 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 06:53:54.045134 1796928 machine.go:96] duration metric: took 4.060362432s to provisionDockerMachine
	I0904 06:53:54.045148 1796928 start.go:293] postStartSetup for "default-k8s-diff-port-520775" (driver="docker")
	I0904 06:53:54.045187 1796928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 06:53:54.045256 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 06:53:54.045307 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.064198 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.152873 1796928 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 06:53:54.156293 1796928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 06:53:54.156319 1796928 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 06:53:54.156326 1796928 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 06:53:54.156333 1796928 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 06:53:54.156345 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/addons for local assets ...
	I0904 06:53:54.156399 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/files for local assets ...
	I0904 06:53:54.156481 1796928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem -> 15207162.pem in /etc/ssl/certs
	I0904 06:53:54.156610 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 06:53:54.165073 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:54.187780 1796928 start.go:296] duration metric: took 142.614938ms for postStartSetup
	I0904 06:53:54.187887 1796928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:53:54.187937 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.205683 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.292859 1796928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
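
Editor's note: the two df probes above sample /var usage and free space right after postStartSetup so that low disk space can be flagged. A minimal, hypothetical Go sketch of parsing the `df -BG /var` output (field positions assume GNU coreutils df; `freeGigs` is an illustrative name, not minikube's helper):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strconv"
	"strings"
)

// freeGigs runs `df -BG <path>` and returns the "Avail" column of the
// second output line, e.g. "14G" -> 14.
func freeGigs(path string) (int, error) {
	out, err := exec.Command("df", "-BG", path).Output()
	if err != nil {
		return 0, err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return 0, fmt.Errorf("unexpected df output: %q", out)
	}
	fields := strings.Fields(lines[1])
	if len(fields) < 4 {
		return 0, fmt.Errorf("unexpected df line: %q", lines[1])
	}
	return strconv.Atoi(strings.TrimSuffix(fields[3], "G"))
}

func main() {
	free, err := freeGigs("/var")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/var has %dGB free\n", free)
}
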
	I0904 06:53:54.297265 1796928 fix.go:56] duration metric: took 4.65950064s for fixHost
	I0904 06:53:54.297289 1796928 start.go:83] releasing machines lock for "default-k8s-diff-port-520775", held for 4.659549727s
	I0904 06:53:54.297358 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:54.315327 1796928 ssh_runner.go:195] Run: cat /version.json
	I0904 06:53:54.315393 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.315420 1796928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 06:53:54.315484 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.335338 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.336109 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.493584 1796928 ssh_runner.go:195] Run: systemctl --version
	I0904 06:53:54.498345 1796928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 06:53:54.638467 1796928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 06:53:54.642924 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.652284 1796928 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 06:53:54.652347 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.660849 1796928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
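
Editor's note: the two find/mv passes above sideline any pre-existing loopback, bridge, or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI config minikube lays down later (kindnet in this run) is active. A rough Go equivalent of that rename pass, as a sketch rather than minikube's actual code (`disableConfs` is an illustrative name):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// disableConfs renames every file in dir matching one of the patterns by
// appending ".mk_disabled", skipping files that were already renamed.
func disableConfs(dir string, patterns ...string) error {
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, p))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			fmt.Println("disabling", m)
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	if err := disableConfs("/etc/cni/net.d", "*loopback.conf*", "*bridge*", "*podman*"); err != nil {
		log.Fatal(err)
	}
}
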
	I0904 06:53:54.660875 1796928 start.go:495] detecting cgroup driver to use...
	I0904 06:53:54.660913 1796928 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 06:53:54.660966 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 06:53:54.672418 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 06:53:54.683134 1796928 docker.go:218] disabling cri-docker service (if available) ...
	I0904 06:53:54.683181 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 06:53:54.695400 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 06:53:54.706646 1796928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 06:53:54.793740 1796928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 06:53:54.873854 1796928 docker.go:234] disabling docker service ...
	I0904 06:53:54.873933 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 06:53:54.885885 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 06:53:54.896737 1796928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 06:53:54.980788 1796928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 06:53:55.057730 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 06:53:55.068310 1796928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 06:53:55.083683 1796928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 06:53:55.083736 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.093158 1796928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 06:53:55.093215 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.102672 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.113082 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.122399 1796928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 06:53:55.131334 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.140602 1796928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.150009 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.159908 1796928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 06:53:55.167649 1796928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 06:53:55.175680 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.254239 1796928 ssh_runner.go:195] Run: sudo systemctl restart crio
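
Editor's note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: it pins the pause image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup = "pod", opens net.ipv4.ip_unprivileged_port_start=0 via default_sysctls, enables ip_forward, then reloads systemd and restarts CRI-O. A minimal Go sketch of the same regex-based rewriting applied to an in-memory config string (illustrative only; the sample config content is assumed):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of setting cgroup_manager and appending conmon_cgroup on the next line.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
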
	I0904 06:53:55.362926 1796928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 06:53:55.363001 1796928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 06:53:55.366648 1796928 start.go:563] Will wait 60s for crictl version
	I0904 06:53:55.366695 1796928 ssh_runner.go:195] Run: which crictl
	I0904 06:53:55.369962 1796928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 06:53:55.403453 1796928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 06:53:55.403538 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.441474 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.479608 1796928 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 06:53:55.480915 1796928 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-520775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 06:53:55.497935 1796928 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0904 06:53:55.502150 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:53:55.514295 1796928 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 06:53:55.514485 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:55.514556 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.564218 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.564245 1796928 crio.go:433] Images already preloaded, skipping extraction
	I0904 06:53:55.564292 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.602409 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.602436 1796928 cache_images.go:85] Images are preloaded, skipping loading
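
Editor's note: `crictl images --output json` lists the images the CRI runtime already has; minikube compares that against what the preloaded tarball provides to decide whether extraction can be skipped, as it is here. A hypothetical Go sketch of such a presence check (the JSON field names `images`/`repoTags` are assumed from crictl's output format, and `hasImage` is an illustrative name):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors the subset of `crictl images --output json` we need.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/pause:3.10.1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause image present:", ok)
}
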
	I0904 06:53:55.602446 1796928 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 crio true true} ...
	I0904 06:53:55.602577 1796928 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-520775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 06:53:55.602645 1796928 ssh_runner.go:195] Run: crio config
	I0904 06:53:55.664543 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:55.664570 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:55.664584 1796928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 06:53:55.664612 1796928 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-520775 NodeName:default-k8s-diff-port-520775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 06:53:55.664768 1796928 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-520775"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 06:53:55.664845 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 06:53:55.673590 1796928 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 06:53:55.673661 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 06:53:55.682016 1796928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0904 06:53:55.699448 1796928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 06:53:55.717472 1796928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
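
Editor's note: the 2228-byte kubeadm.yaml.new just copied is the multi-document config printed above: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration. A small stdlib-only Go sketch that splits such a file on "---" and reports each document's kind (illustrative, not minikube code; the path is taken from the log line above):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}
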
	I0904 06:53:55.734579 1796928 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0904 06:53:55.737941 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
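
Editor's note: the bash one-liner above is minikube's idempotent /etc/hosts update: drop any existing line for the name, append the fresh mapping, and copy the temp file back. The same filter-and-append step in Go, as a sketch against an arbitrary hosts file (`setHost` is an illustrative name):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// setHost rewrites path so that exactly one line maps name to ip.
func setHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // drop blanks and any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHost("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
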
	I0904 06:53:55.748899 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.834506 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:55.848002 1796928 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775 for IP: 192.168.103.2
	I0904 06:53:55.848028 1796928 certs.go:194] generating shared ca certs ...
	I0904 06:53:55.848048 1796928 certs.go:226] acquiring lock for ca certs: {Name:mk2d06825c36f44304767b415a9a93c84edb2667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:55.848186 1796928 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key
	I0904 06:53:55.848228 1796928 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key
	I0904 06:53:55.848237 1796928 certs.go:256] generating profile certs ...
	I0904 06:53:55.848310 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/client.key
	I0904 06:53:55.848365 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key.6ec15110
	I0904 06:53:55.848406 1796928 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key
	I0904 06:53:55.848517 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem (1338 bytes)
	W0904 06:53:55.848547 1796928 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716_empty.pem, impossibly tiny 0 bytes
	I0904 06:53:55.848556 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 06:53:55.848578 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem (1082 bytes)
	I0904 06:53:55.848601 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem (1123 bytes)
	I0904 06:53:55.848627 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem (1675 bytes)
	I0904 06:53:55.848669 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:55.849251 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 06:53:55.876639 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 06:53:55.904012 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 06:53:55.936371 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 06:53:56.018233 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0904 06:53:56.041340 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 06:53:56.065911 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 06:53:56.089737 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 06:53:56.112935 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /usr/share/ca-certificates/15207162.pem (1708 bytes)
	I0904 06:53:56.138060 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 06:53:56.162385 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem --> /usr/share/ca-certificates/1520716.pem (1338 bytes)
	I0904 06:53:56.185546 1796928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 06:53:56.202891 1796928 ssh_runner.go:195] Run: openssl version
	I0904 06:53:56.208611 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15207162.pem && ln -fs /usr/share/ca-certificates/15207162.pem /etc/ssl/certs/15207162.pem"
	I0904 06:53:56.219865 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223785 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:07 /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223867 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.231657 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15207162.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 06:53:56.243527 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 06:53:56.253334 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257449 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 06:00 /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257517 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.264253 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 06:53:56.273629 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1520716.pem && ln -fs /usr/share/ca-certificates/1520716.pem /etc/ssl/certs/1520716.pem"
	I0904 06:53:56.283120 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286378 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:07 /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286450 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.293207 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1520716.pem /etc/ssl/certs/51391683.0"
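
Editor's note: the ls/openssl/ln sequence above exists because OpenSSL looks up trusted CAs in /etc/ssl/certs by subject-hash filename: `openssl x509 -hash -noout -in cert.pem` prints the hash, and the cert must be reachable as <hash>.0. A hedged Go sketch that reproduces the symlink step by shelling out to openssl (`linkCert` is an illustrative name):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkCert makes certPath resolvable by OpenSSL under /etc/ssl/certs/<hash>.0.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ignore error; mimic `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
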
	I0904 06:53:56.301668 1796928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 06:53:56.308006 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 06:53:56.315155 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 06:53:56.322059 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 06:53:56.329568 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 06:53:56.337737 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 06:53:56.345511 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
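
Editor's note: each `openssl x509 -checkend 86400` call above simply asks whether the certificate will still be valid 24 hours from now. The same check with Go's crypto/x509, as a sketch (`expiresWithin` is an illustrative name):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+d, the equivalent of `openssl x509 -checkend <seconds>` failing.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
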
	I0904 06:53:56.353351 1796928 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:56.353482 1796928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 06:53:56.353539 1796928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 06:53:56.397941 1796928 cri.go:89] found id: ""
	I0904 06:53:56.398012 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 06:53:56.408886 1796928 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 06:53:56.408981 1796928 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0904 06:53:56.409041 1796928 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 06:53:56.424530 1796928 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 06:53:56.425727 1796928 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-520775" does not appear in /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.426580 1796928 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-1516970/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-520775" cluster setting kubeconfig missing "default-k8s-diff-port-520775" context setting]
	I0904 06:53:56.427949 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.430031 1796928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 06:53:56.444430 1796928 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0904 06:53:56.444470 1796928 kubeadm.go:593] duration metric: took 35.478353ms to restartPrimaryControlPlane
	I0904 06:53:56.444481 1796928 kubeadm.go:394] duration metric: took 91.143305ms to StartCluster
	I0904 06:53:56.444503 1796928 settings.go:142] acquiring lock: {Name:mk2d1c8a569b62879275d6daa2b799b595d6bca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.444560 1796928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.447245 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.447495 1796928 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 06:53:56.447711 1796928 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 06:53:56.447836 1796928 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447860 1796928 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447868 1796928 addons.go:247] addon storage-provisioner should already be in state true
	I0904 06:53:56.447888 1796928 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447903 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.447928 1796928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-520775"
	I0904 06:53:56.447921 1796928 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447939 1796928 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447979 1796928 addons.go:247] addon dashboard should already be in state true
	I0904 06:53:56.447980 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	W0904 06:53:56.447982 1796928 addons.go:247] addon metrics-server should already be in state true
	I0904 06:53:56.448017 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448020 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448276 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448431 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448473 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448520 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.450093 1796928 out.go:179] * Verifying Kubernetes components...
	I0904 06:53:56.451389 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:56.482390 1796928 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.482412 1796928 addons.go:247] addon default-storageclass should already be in state true
	I0904 06:53:56.482437 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.482730 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.485071 1796928 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 06:53:56.485089 1796928 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0904 06:53:56.488270 1796928 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.488294 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 06:53:56.488355 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.490382 1796928 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0904 06:53:56.491521 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0904 06:53:56.491536 1796928 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0904 06:53:56.491584 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.496773 1796928 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W0904 06:53:55.257485 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:53:57.757496 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:53:56.497920 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 06:53:56.497941 1796928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 06:53:56.498005 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.511983 1796928 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.512010 1796928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 06:53:56.512072 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.529596 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.531423 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.543761 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.547939 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.815518 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:56.824564 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.900475 1796928 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:53:56.903122 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.915401 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0904 06:53:56.915439 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0904 06:53:57.011674 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 06:53:57.011705 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0904 06:53:57.025890 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0904 06:53:57.025929 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0904 06:53:57.130640 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0904 06:53:57.130669 1796928 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0904 06:53:57.201935 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 06:53:57.201971 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	W0904 06:53:57.228446 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228496 1796928 retry.go:31] will retry after 331.542893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 06:53:57.228576 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228595 1796928 retry.go:31] will retry after 234.661911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
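
Editor's note: the two "apply failed, will retry" warnings are expected at this point in the restart: the apiserver behind localhost:8444 is still coming up, so kubectl's OpenAPI validation download is refused and the addon apply is retried after a short delay (retry.go above). A generic sketch of that retry-with-delay pattern, not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, sleeping delay between failures.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; will retry after %v\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // e.g. apiserver not up yet
		}
		return nil
	})
	fmt.Println("final result:", err)
}
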
	I0904 06:53:57.233201 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0904 06:53:57.233235 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0904 06:53:57.312449 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.312483 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 06:53:57.335196 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0904 06:53:57.335296 1796928 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0904 06:53:57.340794 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.423747 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0904 06:53:57.423855 1796928 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0904 06:53:57.464378 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:57.517739 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0904 06:53:57.517836 1796928 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0904 06:53:57.560380 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:57.621494 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0904 06:53:57.621580 1796928 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0904 06:53:57.719817 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:53:57.719851 1796928 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0904 06:53:57.808921 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:54:00.222294 1796928 node_ready.go:49] node "default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:00.222393 1796928 node_ready.go:38] duration metric: took 3.321861305s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:54:00.222414 1796928 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:54:00.222514 1796928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:54:02.420531 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.07964965s)
	I0904 06:54:02.420574 1796928 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-520775"
	I0904 06:54:02.420586 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.956118872s)
	I0904 06:54:02.420682 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.860244874s)
	I0904 06:54:02.420925 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.611964012s)
	I0904 06:54:02.420956 1796928 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.198413181s)
	I0904 06:54:02.421147 1796928 api_server.go:72] duration metric: took 5.973615373s to wait for apiserver process to appear ...
	I0904 06:54:02.421161 1796928 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:54:02.421181 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.422911 1796928 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-520775 addons enable metrics-server
	
	I0904 06:54:02.426397 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.426463 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:02.428576 1796928 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	W0904 06:53:59.759069 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:02.258100 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:02.429861 1796928 addons.go:514] duration metric: took 5.982154586s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:54:02.921448 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.926218 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.926239 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:03.421924 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:03.427035 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0904 06:54:03.428103 1796928 api_server.go:141] control plane version: v1.34.0
	I0904 06:54:03.428127 1796928 api_server.go:131] duration metric: took 1.006959868s to wait for apiserver health ...
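
Editor's note: the healthz wait above keeps probing https://192.168.103.2:8444/healthz until it answers 200 ("ok"); the two earlier 500 responses only flag poststarthooks that had not finished (rbac/bootstrap-roles, apiservice-discovery-controller). A minimal polling sketch in Go (TLS verification is skipped here purely for illustration, since the probe targets the cluster's self-signed cert; `waitHealthy` is an illustrative name):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.103.2:8444/healthz", time.Minute); err != nil {
		log.Fatal(err)
	}
}
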
	I0904 06:54:03.428136 1796928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:54:03.434471 1796928 system_pods.go:59] 9 kube-system pods found
	I0904 06:54:03.434508 1796928 system_pods.go:61] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.434519 1796928 system_pods.go:61] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.434525 1796928 system_pods.go:61] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.434533 1796928 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.434544 1796928 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.434564 1796928 system_pods.go:61] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.434573 1796928 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.434586 1796928 system_pods.go:61] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.434594 1796928 system_pods.go:61] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.434602 1796928 system_pods.go:74] duration metric: took 6.460113ms to wait for pod list to return data ...
	I0904 06:54:03.434614 1796928 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:54:03.437095 1796928 default_sa.go:45] found service account: "default"
	I0904 06:54:03.437116 1796928 default_sa.go:55] duration metric: took 2.49678ms for default service account to be created ...
	I0904 06:54:03.437124 1796928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:54:03.439954 1796928 system_pods.go:86] 9 kube-system pods found
	I0904 06:54:03.439997 1796928 system_pods.go:89] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.440010 1796928 system_pods.go:89] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.440018 1796928 system_pods.go:89] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.440029 1796928 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.440043 1796928 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.440053 1796928 system_pods.go:89] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.440060 1796928 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.440072 1796928 system_pods.go:89] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.440078 1796928 system_pods.go:89] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.440085 1796928 system_pods.go:126] duration metric: took 2.955ms to wait for k8s-apps to be running ...
	I0904 06:54:03.440100 1796928 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:54:03.440162 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:54:03.451705 1796928 system_svc.go:56] duration metric: took 11.594555ms WaitForService to wait for kubelet
	I0904 06:54:03.451731 1796928 kubeadm.go:578] duration metric: took 7.004201759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:54:03.451748 1796928 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:54:03.455005 1796928 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:54:03.455036 1796928 node_conditions.go:123] node cpu capacity is 8
	I0904 06:54:03.455062 1796928 node_conditions.go:105] duration metric: took 3.308068ms to run NodePressure ...
	I0904 06:54:03.455079 1796928 start.go:241] waiting for startup goroutines ...
	I0904 06:54:03.455095 1796928 start.go:246] waiting for cluster config update ...
	I0904 06:54:03.455112 1796928 start.go:255] writing updated cluster config ...
	I0904 06:54:03.455408 1796928 ssh_runner.go:195] Run: rm -f paused
	I0904 06:54:03.458944 1796928 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:03.462665 1796928 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 06:54:04.757792 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:07.257591 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:05.468478 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:07.500893 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:09.756895 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:12.257352 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:09.968652 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:12.468012 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:14.756854 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:17.256905 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:14.468746 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:16.967726 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:18.968373 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:19.257325 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:21.757694 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:20.968633 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:23.467871 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:24.256489 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:24.756710 1794879 pod_ready.go:94] pod "coredns-66bc5c9577-j5gww" is "Ready"
	I0904 06:54:24.756744 1794879 pod_ready.go:86] duration metric: took 31.505206553s for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.759357 1794879 pod_ready.go:83] waiting for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.763174 1794879 pod_ready.go:94] pod "etcd-embed-certs-589812" is "Ready"
	I0904 06:54:24.763194 1794879 pod_ready.go:86] duration metric: took 3.815458ms for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.765056 1794879 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.768709 1794879 pod_ready.go:94] pod "kube-apiserver-embed-certs-589812" is "Ready"
	I0904 06:54:24.768729 1794879 pod_ready.go:86] duration metric: took 3.655905ms for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.770312 1794879 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.955369 1794879 pod_ready.go:94] pod "kube-controller-manager-embed-certs-589812" is "Ready"
	I0904 06:54:24.955399 1794879 pod_ready.go:86] duration metric: took 185.06856ms for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.155371 1794879 pod_ready.go:83] waiting for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.555016 1794879 pod_ready.go:94] pod "kube-proxy-xqvlx" is "Ready"
	I0904 06:54:25.555045 1794879 pod_ready.go:86] duration metric: took 399.644529ms for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.754864 1794879 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155740 1794879 pod_ready.go:94] pod "kube-scheduler-embed-certs-589812" is "Ready"
	I0904 06:54:26.155768 1794879 pod_ready.go:86] duration metric: took 400.874171ms for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155779 1794879 pod_ready.go:40] duration metric: took 32.907618487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:26.201526 1794879 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:26.203310 1794879 out.go:179] * Done! kubectl is now configured to use "embed-certs-589812" cluster and "default" namespace by default
	W0904 06:54:25.468180 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:27.468649 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:29.468703 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:31.967748 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:34.467966 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	I0904 06:54:36.468207 1796928 pod_ready.go:94] pod "coredns-66bc5c9577-hm47q" is "Ready"
	I0904 06:54:36.468238 1796928 pod_ready.go:86] duration metric: took 33.005546695s for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.470247 1796928 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.474087 1796928 pod_ready.go:94] pod "etcd-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.474113 1796928 pod_ready.go:86] duration metric: took 3.802864ms for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.476057 1796928 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.479419 1796928 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.479437 1796928 pod_ready.go:86] duration metric: took 3.359104ms for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.481399 1796928 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.666267 1796928 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.666294 1796928 pod_ready.go:86] duration metric: took 184.873705ms for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.866510 1796928 pod_ready.go:83] waiting for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.266395 1796928 pod_ready.go:94] pod "kube-proxy-zrlrh" is "Ready"
	I0904 06:54:37.266428 1796928 pod_ready.go:86] duration metric: took 399.888589ms for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.466543 1796928 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866935 1796928 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:37.866974 1796928 pod_ready.go:86] duration metric: took 400.403816ms for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866986 1796928 pod_ready.go:40] duration metric: took 34.408008083s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:37.912300 1796928 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:37.913920 1796928 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-520775" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 06:59:30 no-preload-574576 crio[666]: time="2025-09-04 06:59:30.237355057Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=2c236a0f-538b-41de-becc-6ca29b68094e name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:30 no-preload-574576 crio[666]: time="2025-09-04 06:59:30.237886530Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d4fe329e-e743-4e74-b655-bcca994928be name=/runtime.v1.ImageService/PullImage
	Sep 04 06:59:30 no-preload-574576 crio[666]: time="2025-09-04 06:59:30.239089458Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 04 06:59:33 no-preload-574576 crio[666]: time="2025-09-04 06:59:33.240796125Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c85c75c9-a13f-48dd-aed0-cc10ba519914 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:33 no-preload-574576 crio[666]: time="2025-09-04 06:59:33.241100168Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c85c75c9-a13f-48dd-aed0-cc10ba519914 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:48 no-preload-574576 crio[666]: time="2025-09-04 06:59:48.236590612Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=dc10ee15-9525-4743-9e5e-35c5a121c1cc name=/runtime.v1.ImageService/ImageStatus
	Sep 04 06:59:48 no-preload-574576 crio[666]: time="2025-09-04 06:59:48.236886373Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=dc10ee15-9525-4743-9e5e-35c5a121c1cc name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:02 no-preload-574576 crio[666]: time="2025-09-04 07:00:02.235973617Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=64b003eb-5991-4367-9894-16d18f398de3 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:02 no-preload-574576 crio[666]: time="2025-09-04 07:00:02.236254920Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=64b003eb-5991-4367-9894-16d18f398de3 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:12 no-preload-574576 crio[666]: time="2025-09-04 07:00:12.236753464Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b6cc311a-29fb-4b61-b2cc-50b83d8ef07c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:12 no-preload-574576 crio[666]: time="2025-09-04 07:00:12.237091630Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b6cc311a-29fb-4b61-b2cc-50b83d8ef07c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:13 no-preload-574576 crio[666]: time="2025-09-04 07:00:13.235723618Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4ae66c38-f86e-4761-b850-889b005f6841 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:13 no-preload-574576 crio[666]: time="2025-09-04 07:00:13.236047873Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4ae66c38-f86e-4761-b850-889b005f6841 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:25 no-preload-574576 crio[666]: time="2025-09-04 07:00:25.236089434Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=14419994-e22c-4a30-a4c7-a9df02607c08 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:25 no-preload-574576 crio[666]: time="2025-09-04 07:00:25.236440738Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=14419994-e22c-4a30-a4c7-a9df02607c08 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:28 no-preload-574576 crio[666]: time="2025-09-04 07:00:28.236143602Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=609c8a21-1bca-4f9d-93f3-c533179270a7 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:28 no-preload-574576 crio[666]: time="2025-09-04 07:00:28.236428975Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=609c8a21-1bca-4f9d-93f3-c533179270a7 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:38 no-preload-574576 crio[666]: time="2025-09-04 07:00:38.236729503Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8091141c-968e-4aa6-b438-fc58a8be1e88 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:38 no-preload-574576 crio[666]: time="2025-09-04 07:00:38.237060155Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8091141c-968e-4aa6-b438-fc58a8be1e88 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:41 no-preload-574576 crio[666]: time="2025-09-04 07:00:41.236724750Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=52902e36-e3b0-4c43-80bc-b9ea074c1f6c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:41 no-preload-574576 crio[666]: time="2025-09-04 07:00:41.236994235Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=52902e36-e3b0-4c43-80bc-b9ea074c1f6c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:52 no-preload-574576 crio[666]: time="2025-09-04 07:00:52.235892814Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6bdd8874-7270-4f6c-b356-13d659799e78 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:52 no-preload-574576 crio[666]: time="2025-09-04 07:00:52.236244551Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6bdd8874-7270-4f6c-b356-13d659799e78 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:53 no-preload-574576 crio[666]: time="2025-09-04 07:00:53.235703512Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5d741edc-5821-4f4e-9e41-8102c9fcceb9 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:00:53 no-preload-574576 crio[666]: time="2025-09-04 07:00:53.236035355Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5d741edc-5821-4f4e-9e41-8102c9fcceb9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	de7c78dcffd6a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   99b9a9cdc190d       dashboard-metrics-scraper-6ffb444bf9-wm46d
	c8136e0896839       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   d0ee403e6035f       storage-provisioner
	a21465d8b7fdd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   408c41dd6d4e9       coredns-66bc5c9577-g4ljx
	6bca85b30355c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   8f97a902d3bce       busybox
	350de3861b1dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   b36df929f7b38       kindnet-w6frr
	739d378171e97       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   2f70166fd9f50       kube-proxy-9mbq6
	55592e1198d59       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   d0ee403e6035f       storage-provisioner
	0a2bb5e07e675       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   4f56ae4464038       etcd-no-preload-574576
	bca0ae139442e       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   e64edcea25c8c       kube-apiserver-no-preload-574576
	6781db6486f53       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   aa66d60fa2806       kube-scheduler-no-preload-574576
	b8bcf79ea0251       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   11502392a0613       kube-controller-manager-no-preload-574576
	
	
	==> coredns [a21465d8b7fddb1579125e0031a25e9e42476eb09b47d3c11f86cb5f968a86a6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37281 - 7213 "HINFO IN 7968160076350310455.3401697822833025019. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044332656s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-574576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-574576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=no-preload-574576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_50_20_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:50:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-574576
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 07:00:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 06:58:43 +0000   Thu, 04 Sep 2025 06:50:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 06:58:43 +0000   Thu, 04 Sep 2025 06:50:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 06:58:43 +0000   Thu, 04 Sep 2025 06:50:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 06:58:43 +0000   Thu, 04 Sep 2025 06:50:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-574576
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 d5052de359b54ec3a3ddba9267f3f8f8
	  System UUID:                008625d3-fb91-460f-8e35-73af0d41b639
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-g4ljx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-no-preload-574576                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-w6frr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-no-preload-574576              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-no-preload-574576     200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9mbq6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-no-preload-574576              100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-7qmkr               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wm46d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rf2hg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m37s                  kube-proxy       
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node no-preload-574576 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node no-preload-574576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node no-preload-574576 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                    node-controller  Node no-preload-574576 event: Registered Node no-preload-574576 in Controller
	  Normal   NodeReady                10m                    kubelet          Node no-preload-574576 status is now: NodeReady
	  Normal   Starting                 9m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m43s (x8 over 9m43s)  kubelet          Node no-preload-574576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m43s (x8 over 9m43s)  kubelet          Node no-preload-574576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m43s (x8 over 9m43s)  kubelet          Node no-preload-574576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m34s                  node-controller  Node no-preload-574576 event: Registered Node no-preload-574576 in Controller
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +2.011770] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000003] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +1.535866] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000001] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.003918] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +2.555764] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000006] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000004] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +8.191102] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000008] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000002] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	
	
	==> etcd [0a2bb5e07e675a06d7d5365f4ea46671cdd16bdeeefe39c7a4a4d25750de1c68] <==
	{"level":"warn","ts":"2025-09-04T06:51:13.324220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.330943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.347945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.353775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.360785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.400037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.406438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.413499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.419824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.425947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.432855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.439381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.446504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.476087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.479424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.503390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.509547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.556727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58348","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T06:52:04.359170Z","caller":"traceutil/trace.go:172","msg":"trace[719735697] transaction","detail":"{read_only:false; response_revision:675; number_of_response:1; }","duration":"119.061808ms","start":"2025-09-04T06:52:04.240087Z","end":"2025-09-04T06:52:04.359149Z","steps":["trace[719735697] 'process raft request'  (duration: 118.955483ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:52:05.626053Z","caller":"traceutil/trace.go:172","msg":"trace[918571184] transaction","detail":"{read_only:false; response_revision:680; number_of_response:1; }","duration":"166.645055ms","start":"2025-09-04T06:52:05.459388Z","end":"2025-09-04T06:52:05.626033Z","steps":["trace[918571184] 'process raft request'  (duration: 83.482308ms)","trace[918571184] 'compare'  (duration: 83.040109ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T06:52:48.384673Z","caller":"traceutil/trace.go:172","msg":"trace[836855457] linearizableReadLoop","detail":"{readStateIndex:782; appliedIndex:782; }","duration":"132.164763ms","start":"2025-09-04T06:52:48.252486Z","end":"2025-09-04T06:52:48.384651Z","steps":["trace[836855457] 'read index received'  (duration: 132.156788ms)","trace[836855457] 'applied index is now lower than readState.Index'  (duration: 6.756µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T06:52:48.384855Z","caller":"traceutil/trace.go:172","msg":"trace[1954956525] transaction","detail":"{read_only:false; response_revision:733; number_of_response:1; }","duration":"141.725704ms","start":"2025-09-04T06:52:48.243112Z","end":"2025-09-04T06:52:48.384838Z","steps":["trace[1954956525] 'process raft request'  (duration: 141.569723ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T06:52:48.384899Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.362913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg.186201bca8ca4cab\" limit:1 ","response":"range_response_count:1 size:947"}
	{"level":"info","ts":"2025-09-04T06:52:48.384989Z","caller":"traceutil/trace.go:172","msg":"trace[1911627484] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg.186201bca8ca4cab; range_end:; response_count:1; response_revision:732; }","duration":"132.502342ms","start":"2025-09-04T06:52:48.252475Z","end":"2025-09-04T06:52:48.384977Z","steps":["trace[1911627484] 'agreement among raft nodes before linearized reading'  (duration: 132.267398ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:52:50.274391Z","caller":"traceutil/trace.go:172","msg":"trace[1344793255] transaction","detail":"{read_only:false; response_revision:740; number_of_response:1; }","duration":"124.275044ms","start":"2025-09-04T06:52:50.150089Z","end":"2025-09-04T06:52:50.274364Z","steps":["trace[1344793255] 'process raft request'  (duration: 124.068613ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:00:53 up  4:43,  0 users,  load average: 0.55, 1.24, 1.68
	Linux no-preload-574576 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [350de3861b1dc56b7e34601fd88f6d1ab9a8f3908d667be044393fda23dca64a] <==
	I0904 06:58:46.503944       1 main.go:301] handling current node
	I0904 06:58:56.510834       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 06:58:56.510865       1 main.go:301] handling current node
	I0904 06:59:06.504164       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 06:59:06.504195       1 main.go:301] handling current node
	I0904 06:59:16.511939       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 06:59:16.511973       1 main.go:301] handling current node
	I0904 06:59:26.511065       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 06:59:26.511106       1 main.go:301] handling current node
	I0904 06:59:36.503947       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 06:59:36.503982       1 main.go:301] handling current node
	I0904 06:59:46.511927       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 06:59:46.511959       1 main.go:301] handling current node
	I0904 06:59:56.507942       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 06:59:56.507980       1 main.go:301] handling current node
	I0904 07:00:06.503255       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:00:06.503293       1 main.go:301] handling current node
	I0904 07:00:16.505168       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:00:16.505201       1 main.go:301] handling current node
	I0904 07:00:26.503652       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:00:26.503702       1 main.go:301] handling current node
	I0904 07:00:36.508786       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:00:36.508967       1 main.go:301] handling current node
	I0904 07:00:46.503879       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:00:46.503912       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bca0ae139442e7d50d3cddbc0fc77c7d71f27421ae41c30357e7538da5f054bf] <==
	I0904 06:56:37.851876       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 06:57:15.225145       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 06:57:15.225190       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 06:57:15.225206       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 06:57:15.226311       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 06:57:15.226393       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 06:57:15.226404       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 06:57:15.856946       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:57:55.302836       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:58:38.089877       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:59:03.346716       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 06:59:15.225511       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 06:59:15.225565       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 06:59:15.225579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 06:59:15.226610       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 06:59:15.226702       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 06:59:15.226733       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 06:59:43.622522       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:00:28.114301       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [b8bcf79ea02511930b5221e35df8b6b4b686e5b9f11a570db679378717f0b0a3] <==
	I0904 06:54:49.629645       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:55:19.599180       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:55:19.635902       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:55:49.603250       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:55:49.643328       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:56:19.608446       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:56:19.650094       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:56:49.612354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:56:49.656828       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:57:19.617383       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:57:19.663573       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:57:49.621960       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:57:49.670965       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:58:19.626711       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:58:19.678321       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:58:49.631273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:58:49.684654       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:59:19.635351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:59:19.691511       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:59:49.639398       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:59:49.698471       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:00:19.644369       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:00:19.705288       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:00:49.649497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:00:49.712552       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [739d378171e97dd327b3f332900b3b60caca10a991c55f3e28c636ae1afab805] <==
	I0904 06:51:16.144228       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:51:16.317230       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:51:16.417386       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:51:16.417464       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0904 06:51:16.417579       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:51:16.437820       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:51:16.437885       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:51:16.441933       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:51:16.442308       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:51:16.442350       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:51:16.445117       1 config.go:200] "Starting service config controller"
	I0904 06:51:16.445133       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:51:16.445143       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:51:16.445148       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:51:16.445134       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:51:16.445173       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:51:16.445245       1 config.go:309] "Starting node config controller"
	I0904 06:51:16.445283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:51:16.445315       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:51:16.545975       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 06:51:16.546010       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:51:16.545995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6781db6486f532f600b6522565654ef4da9df25769e534e5680c3d8ca37fa996] <==
	I0904 06:51:12.725113       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:51:14.216143       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:51:14.216244       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:51:14.216258       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:51:14.216268       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:51:14.418226       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 06:51:14.418267       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:51:14.500611       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:51:14.500662       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:51:14.501793       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:51:14.501953       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 06:51:14.601187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 07:00:03 no-preload-574576 kubelet[802]: E0904 07:00:03.235746     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wm46d_kubernetes-dashboard(399b73af-1776-4973-905e-d26f180167cb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wm46d" podUID="399b73af-1776-4973-905e-d26f180167cb"
	Sep 04 07:00:10 no-preload-574576 kubelet[802]: E0904 07:00:10.314536     802 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969210314287090  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:10 no-preload-574576 kubelet[802]: E0904 07:00:10.314585     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969210314287090  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:12 no-preload-574576 kubelet[802]: E0904 07:00:12.237436     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg" podUID="0a81ba81-116f-4a44-ab32-2b3c88744009"
	Sep 04 07:00:13 no-preload-574576 kubelet[802]: E0904 07:00:13.236412     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7qmkr" podUID="14f3f7b5-1a03-4bc5-b95b-0a35a2a86978"
	Sep 04 07:00:15 no-preload-574576 kubelet[802]: I0904 07:00:15.235559     802 scope.go:117] "RemoveContainer" containerID="de7c78dcffd6a9c8bc297283c91f7daa18abc2f4f9e00769748a08ae41e1dffe"
	Sep 04 07:00:15 no-preload-574576 kubelet[802]: E0904 07:00:15.235753     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wm46d_kubernetes-dashboard(399b73af-1776-4973-905e-d26f180167cb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wm46d" podUID="399b73af-1776-4973-905e-d26f180167cb"
	Sep 04 07:00:20 no-preload-574576 kubelet[802]: E0904 07:00:20.316104     802 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969220315826799  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:20 no-preload-574576 kubelet[802]: E0904 07:00:20.316145     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969220315826799  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:25 no-preload-574576 kubelet[802]: E0904 07:00:25.236755     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg" podUID="0a81ba81-116f-4a44-ab32-2b3c88744009"
	Sep 04 07:00:28 no-preload-574576 kubelet[802]: I0904 07:00:28.235731     802 scope.go:117] "RemoveContainer" containerID="de7c78dcffd6a9c8bc297283c91f7daa18abc2f4f9e00769748a08ae41e1dffe"
	Sep 04 07:00:28 no-preload-574576 kubelet[802]: E0904 07:00:28.235969     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wm46d_kubernetes-dashboard(399b73af-1776-4973-905e-d26f180167cb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wm46d" podUID="399b73af-1776-4973-905e-d26f180167cb"
	Sep 04 07:00:28 no-preload-574576 kubelet[802]: E0904 07:00:28.236691     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7qmkr" podUID="14f3f7b5-1a03-4bc5-b95b-0a35a2a86978"
	Sep 04 07:00:30 no-preload-574576 kubelet[802]: E0904 07:00:30.317279     802 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969230317021363  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:30 no-preload-574576 kubelet[802]: E0904 07:00:30.317326     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969230317021363  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:38 no-preload-574576 kubelet[802]: E0904 07:00:38.237417     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg" podUID="0a81ba81-116f-4a44-ab32-2b3c88744009"
	Sep 04 07:00:40 no-preload-574576 kubelet[802]: E0904 07:00:40.318938     802 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969240318668076  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:40 no-preload-574576 kubelet[802]: E0904 07:00:40.318979     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969240318668076  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:41 no-preload-574576 kubelet[802]: E0904 07:00:41.237351     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7qmkr" podUID="14f3f7b5-1a03-4bc5-b95b-0a35a2a86978"
	Sep 04 07:00:42 no-preload-574576 kubelet[802]: I0904 07:00:42.237355     802 scope.go:117] "RemoveContainer" containerID="de7c78dcffd6a9c8bc297283c91f7daa18abc2f4f9e00769748a08ae41e1dffe"
	Sep 04 07:00:42 no-preload-574576 kubelet[802]: E0904 07:00:42.237564     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wm46d_kubernetes-dashboard(399b73af-1776-4973-905e-d26f180167cb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wm46d" podUID="399b73af-1776-4973-905e-d26f180167cb"
	Sep 04 07:00:50 no-preload-574576 kubelet[802]: E0904 07:00:50.320313     802 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969250320043200  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:50 no-preload-574576 kubelet[802]: E0904 07:00:50.320356     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969250320043200  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:00:52 no-preload-574576 kubelet[802]: E0904 07:00:52.236543     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg" podUID="0a81ba81-116f-4a44-ab32-2b3c88744009"
	Sep 04 07:00:53 no-preload-574576 kubelet[802]: E0904 07:00:53.236432     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7qmkr" podUID="14f3f7b5-1a03-4bc5-b95b-0a35a2a86978"
	
	
	==> storage-provisioner [55592e1198d594770403fcc20e6174ff3e1f124050a8d46f6a49c878245932fe] <==
	I0904 06:51:16.018726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 06:51:46.021639       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c8136e0896839ed9725ac3a18e4cdc34fca2f12d8852b78fa7c810b6e5e09950] <==
	W0904 07:00:28.022868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:30.025956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:30.030025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:32.032717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:32.037589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:34.041195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:34.044881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:36.047512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:36.051936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:38.056082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:38.060684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:40.064599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:40.069869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:42.072890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:42.077067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:44.080065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:44.083681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:46.087042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:46.090719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:48.093553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:48.099071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:50.101971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:50.107120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:52.110832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:00:52.117611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-574576 -n no-preload-574576
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-574576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-7qmkr kubernetes-dashboard-855c9754f9-rf2hg
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-574576 describe pod metrics-server-746fcd58dc-7qmkr kubernetes-dashboard-855c9754f9-rf2hg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-574576 describe pod metrics-server-746fcd58dc-7qmkr kubernetes-dashboard-855c9754f9-rf2hg: exit status 1 (57.356506ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-7qmkr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rf2hg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-574576 describe pod metrics-server-746fcd58dc-7qmkr kubernetes-dashboard-855c9754f9-rf2hg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.46s)
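The ImagePullBackOff entries for the dashboard image in the kubelet log above all trace back to Docker Hub's unauthenticated pull rate limit ("toomanyrequests"). Below is a minimal sketch, not part of the test suite, for checking how much anonymous quota a CI host has left; it uses the token endpoint, rate-limit preview repository, and response headers that Docker documents for this purpose (auth.docker.io, registry-1.docker.io/v2/ratelimitpreview/test, ratelimit-limit / ratelimit-remaining), which are assumptions about Docker Hub's current API rather than anything taken from this report.

// Sketch: query Docker Hub's anonymous pull-rate headers.
// A HEAD request on the documented rate-limit preview manifest is
// assumed not to consume a pull; adjust if Docker's endpoints change.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous pull token for the ratelimitpreview/test repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest and read the rate-limit headers from the response.
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	fmt.Println("ratelimit-limit:     ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining: ", res.Header.Get("ratelimit-remaining"))
	fmt.Println("ratelimit-source:    ", res.Header.Get("docker-ratelimit-source"))
}

A near-zero ratelimit-remaining value on the build agent would be consistent with the repeated toomanyrequests failures recorded in the kubelet log above.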

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wlwcq" [ddf273f4-7295-4b47-a1af-b2f7c30d2f94] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 06:54:31.715495 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-589812 -n embed-certs-589812
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-04 07:03:26.835941777 +0000 UTC m=+3797.624972410
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-589812 describe po kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-589812 describe po kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-wlwcq
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-589812/192.168.94.2
Start Time:       Thu, 04 Sep 2025 06:53:55 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-trx94 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-trx94:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m31s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wlwcq to embed-certs-589812
Normal   Pulling    4m25s (x5 over 9m31s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m55s (x5 over 8m57s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m55s (x5 over 8m57s)   kubelet            Error: ErrImagePull
Warning  Failed     2m50s (x16 over 8m57s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    102s (x21 over 8m57s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-589812 logs kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-589812 logs kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard: exit status 1 (72.601692ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-wlwcq" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-589812 logs kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
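The wait that times out here polls for pods carrying the k8s-app=kubernetes-dashboard label until they report Ready. The following is a minimal client-go sketch of that kind of wait loop; it is hypothetical and not the helpers_test.go implementation, and it assumes the k8s.io/client-go module plus a kubeconfig at the default path pointing at the cluster under test.

// Sketch: poll a namespace for pods matching a label selector until all are
// Ready or the 9-minute deadline passes, roughly mirroring the wait above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	err = wait.PollUntilContextCancel(ctx, 5*time.Second, true, func(ctx context.Context) (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling on transient errors or empty lists
		}
		for _, p := range pods.Items {
			if !podReady(&p) {
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("pods did not become Ready in time:", err)
		return
	}
	fmt.Println("all matching pods are Ready")
}

Because the dashboard image never pulls, a loop like this keeps observing Ready=False until the context deadline expires, matching the "context deadline exceeded" reported above before the post-mortem section that follows.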
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-589812
helpers_test.go:243: (dbg) docker inspect embed-certs-589812:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e",
	        "Created": "2025-09-04T06:52:05.721813416Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1795063,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:53:38.983357181Z",
	            "FinishedAt": "2025-09-04T06:53:38.236293542Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e/hosts",
	        "LogPath": "/var/lib/docker/containers/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e-json.log",
	        "Name": "/embed-certs-589812",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-589812:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-589812",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e",
	                "LowerDir": "/var/lib/docker/overlay2/29b9979564cb53163c731acd557f9ccddda8f5bb35afe526647e9462d37422d8-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29b9979564cb53163c731acd557f9ccddda8f5bb35afe526647e9462d37422d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29b9979564cb53163c731acd557f9ccddda8f5bb35afe526647e9462d37422d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29b9979564cb53163c731acd557f9ccddda8f5bb35afe526647e9462d37422d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-589812",
	                "Source": "/var/lib/docker/volumes/embed-certs-589812/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-589812",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-589812",
	                "name.minikube.sigs.k8s.io": "embed-certs-589812",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a9c0fa1d4d1c4c114abf8ac3fc5d11d53182a2b8f5b8047ce9e4181a59fe1c1",
	            "SandboxKey": "/var/run/docker/netns/9a9c0fa1d4d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34274"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34275"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34278"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34276"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34277"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-589812": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:b7:ff:9e:ed:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "806214837f28a2edc5791a33bea586453455fab44fad177c8aac833d4001dfed",
	                    "EndpointID": "b4cb6b560accbbaebb5aa4fc48ecc4d80bfd0c24aef0e0f38e6f38c4dc5a258f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-589812",
	                        "0161e12dd5cf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-589812 -n embed-certs-589812
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-589812 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-589812 logs -n 25: (1.178238656s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-869290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p old-k8s-version-869290 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-869290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ start   │ -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable metrics-server -p no-preload-574576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p no-preload-574576 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable dashboard -p no-preload-574576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p cert-expiration-620042 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ delete  │ -p cert-expiration-620042                                                                                                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │                     │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p kubernetes-upgrade-892549                                                                                                                                                                                                                  │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p disable-driver-mounts-393542                                                                                                                                                                                                               │ disable-driver-mounts-393542 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-589812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p embed-certs-589812 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-520775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p default-k8s-diff-port-520775 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-589812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-520775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:53:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:53:49.418555 1796928 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:53:49.418725 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.418774 1796928 out.go:374] Setting ErrFile to fd 2...
	I0904 06:53:49.418785 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.419117 1796928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:53:49.419985 1796928 out.go:368] Setting JSON to false
	I0904 06:53:49.421632 1796928 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16579,"bootTime":1756952250,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:53:49.421749 1796928 start.go:140] virtualization: kvm guest
	I0904 06:53:49.423972 1796928 out.go:179] * [default-k8s-diff-port-520775] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:53:49.425842 1796928 notify.go:220] Checking for updates...
	I0904 06:53:49.425850 1796928 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:53:49.427436 1796928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:53:49.428783 1796928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:49.429989 1796928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:53:49.431134 1796928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:53:49.432406 1796928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:53:49.434250 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:49.435089 1796928 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:53:49.462481 1796928 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:53:49.462577 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.536244 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.525128821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.536390 1796928 docker.go:318] overlay module found
	I0904 06:53:49.539526 1796928 out.go:179] * Using the docker driver based on existing profile
	I0904 06:53:49.540719 1796928 start.go:304] selected driver: docker
	I0904 06:53:49.540734 1796928 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.540822 1796928 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:53:49.541681 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.594566 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.585030944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.595064 1796928 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:49.595111 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:49.595174 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:49.595223 1796928 start.go:348] cluster config:
	{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.597216 1796928 out.go:179] * Starting "default-k8s-diff-port-520775" primary control-plane node in "default-k8s-diff-port-520775" cluster
	I0904 06:53:49.598401 1796928 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 06:53:49.599526 1796928 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 06:53:49.604882 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:49.604957 1796928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 06:53:49.604977 1796928 cache.go:58] Caching tarball of preloaded images
	I0904 06:53:49.604992 1796928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 06:53:49.605104 1796928 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 06:53:49.605123 1796928 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 06:53:49.605341 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.637613 1796928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 06:53:49.637635 1796928 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 06:53:49.637647 1796928 cache.go:232] Successfully downloaded all kic artifacts
	I0904 06:53:49.637673 1796928 start.go:360] acquireMachinesLock for default-k8s-diff-port-520775: {Name:mkd2b36988a85f8d5c3a19497a99007da8aadae2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 06:53:49.637729 1796928 start.go:364] duration metric: took 33.006µs to acquireMachinesLock for "default-k8s-diff-port-520775"
	I0904 06:53:49.637749 1796928 start.go:96] Skipping create...Using existing machine configuration
	I0904 06:53:49.637756 1796928 fix.go:54] fixHost starting: 
	I0904 06:53:49.637963 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.656941 1796928 fix.go:112] recreateIfNeeded on default-k8s-diff-port-520775: state=Stopped err=<nil>
	W0904 06:53:49.656986 1796928 fix.go:138] unexpected machine state, will restart: <nil>
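
The acquireMachinesLock entries above show a named, per-profile lock taken with a 500ms retry delay and a 10m timeout before fixHost runs. A minimal sketch of that retried file-lock pattern, using only the Go standard library (the lock path and helper names here are illustrative, not minikube's actual implementation):

// lockfile.go - illustrative sketch of a retried, file-based lock similar in
// spirit to the acquireMachinesLock step above (path and names hypothetical).
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file until it succeeds or the
// timeout expires, retrying every delay interval.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/default-k8s-diff-port-520775.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println("lock error:", err)
		return
	}
	defer release()
	fmt.Println("lock held; machine operations would run here")
}
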
	I0904 06:53:49.524554 1794879 node_ready.go:49] node "embed-certs-589812" is "Ready"
	I0904 06:53:49.524655 1794879 node_ready.go:38] duration metric: took 3.407781482s for node "embed-certs-589812" to be "Ready" ...
	I0904 06:53:49.524688 1794879 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:53:49.524773 1794879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:53:51.714274 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.110482825s)
	I0904 06:53:51.714323 1794879 addons.go:479] Verifying addon metrics-server=true in "embed-certs-589812"
	I0904 06:53:51.714427 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.971633666s)
	I0904 06:53:51.714457 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.901617894s)
	I0904 06:53:51.714590 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.702133151s)
	I0904 06:53:51.714600 1794879 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.189780106s)
	I0904 06:53:51.714619 1794879 api_server.go:72] duration metric: took 5.87883589s to wait for apiserver process to appear ...
	I0904 06:53:51.714626 1794879 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:53:51.714643 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:51.716342 1794879 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-589812 addons enable metrics-server
	
	I0904 06:53:51.722283 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:51.722308 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:51.730360 1794879 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0904 06:53:51.731942 1794879 addons.go:514] duration metric: took 5.89615636s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:53:52.215034 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.219745 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.219786 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:52.715125 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.719686 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.719714 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:53.215303 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:53.219535 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0904 06:53:53.220593 1794879 api_server.go:141] control plane version: v1.34.0
	I0904 06:53:53.220626 1794879 api_server.go:131] duration metric: took 1.505992813s to wait for apiserver health ...
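
The healthz exchanges above follow a simple pattern: poll https://192.168.94.2:8443/healthz every ~500ms, treat a 500 response (here caused by the rbac/bootstrap-roles and apiservice-discovery-controller post-start hooks still being pending) as "not ready yet", and stop once the endpoint returns 200. A standalone sketch of that loop, assuming it is acceptable for the probe to skip verification of the apiserver's self-signed certificate:

// healthz_poll.go - sketch of polling an apiserver /healthz endpoint until it
// reports 200 OK, mirroring the retry pattern in the log above (URL taken from
// the log; intervals and timeouts are illustrative).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-internal certificate, so this bare
		// probe skips verification; a real client would pin the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver did not become healthy within %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
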
	I0904 06:53:53.220641 1794879 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:53:53.224544 1794879 system_pods.go:59] 9 kube-system pods found
	I0904 06:53:53.224588 1794879 system_pods.go:61] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.224605 1794879 system_pods.go:61] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.224618 1794879 system_pods.go:61] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.224628 1794879 system_pods.go:61] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.224640 1794879 system_pods.go:61] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.224650 1794879 system_pods.go:61] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.224659 1794879 system_pods.go:61] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.224682 1794879 system_pods.go:61] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.224694 1794879 system_pods.go:61] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.224704 1794879 system_pods.go:74] duration metric: took 4.053609ms to wait for pod list to return data ...
	I0904 06:53:53.224716 1794879 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:53:53.227290 1794879 default_sa.go:45] found service account: "default"
	I0904 06:53:53.227311 1794879 default_sa.go:55] duration metric: took 2.585826ms for default service account to be created ...
	I0904 06:53:53.227319 1794879 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:53:53.230112 1794879 system_pods.go:86] 9 kube-system pods found
	I0904 06:53:53.230142 1794879 system_pods.go:89] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.230154 1794879 system_pods.go:89] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.230162 1794879 system_pods.go:89] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.230172 1794879 system_pods.go:89] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.230180 1794879 system_pods.go:89] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.230191 1794879 system_pods.go:89] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.230201 1794879 system_pods.go:89] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.230212 1794879 system_pods.go:89] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.230218 1794879 system_pods.go:89] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.230227 1794879 system_pods.go:126] duration metric: took 2.90283ms to wait for k8s-apps to be running ...
	I0904 06:53:53.230240 1794879 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:53:53.230287 1794879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:53:53.241829 1794879 system_svc.go:56] duration metric: took 11.584133ms WaitForService to wait for kubelet
	I0904 06:53:53.241853 1794879 kubeadm.go:578] duration metric: took 7.406070053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:53.241869 1794879 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:53:53.244406 1794879 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:53:53.244445 1794879 node_conditions.go:123] node cpu capacity is 8
	I0904 06:53:53.244459 1794879 node_conditions.go:105] duration metric: took 2.584951ms to run NodePressure ...
	I0904 06:53:53.244478 1794879 start.go:241] waiting for startup goroutines ...
	I0904 06:53:53.244492 1794879 start.go:246] waiting for cluster config update ...
	I0904 06:53:53.244509 1794879 start.go:255] writing updated cluster config ...
	I0904 06:53:53.244784 1794879 ssh_runner.go:195] Run: rm -f paused
	I0904 06:53:53.248131 1794879 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:53:53.251511 1794879 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
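
The "extra waiting" step above checks that pods carrying each control-plane label (k8s-app=kube-dns, component=etcd, and so on) reach Ready. Outside the test harness, roughly the same check can be approximated with kubectl wait; the sketch below simply shells out to it for each label from the log (the invocation is illustrative and ignores the "or be gone" case the real waiter also accepts):

// wait_ready.go - illustrative approximation of the extra pod-readiness wait,
// shelling out to kubectl wait for each control-plane label listed in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	labels := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, l := range labels {
		cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
			"--for=condition=Ready", "pod", "-l", l, "--timeout=4m0s")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s: %s", l, out)
		if err != nil {
			fmt.Println("wait failed:", err)
		}
	}
}
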
	I0904 06:53:49.659280 1796928 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-520775" ...
	I0904 06:53:49.659366 1796928 cli_runner.go:164] Run: docker start default-k8s-diff-port-520775
	I0904 06:53:49.944765 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.965484 1796928 kic.go:430] container "default-k8s-diff-port-520775" state is running.
	I0904 06:53:49.965966 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:49.984536 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.984754 1796928 machine.go:93] provisionDockerMachine start ...
	I0904 06:53:49.984828 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:50.006739 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:50.007122 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:50.007149 1796928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 06:53:50.011282 1796928 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0904 06:53:53.135459 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.135490 1796928 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-520775"
	I0904 06:53:53.135560 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.153046 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.153307 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.153323 1796928 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-520775 && echo "default-k8s-diff-port-520775" | sudo tee /etc/hostname
	I0904 06:53:53.284177 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.284278 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.302854 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.303062 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.303082 1796928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-520775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-520775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-520775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 06:53:53.428269 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 06:53:53.428306 1796928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1516970/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1516970/.minikube}
	I0904 06:53:53.428357 1796928 ubuntu.go:190] setting up certificates
	I0904 06:53:53.428381 1796928 provision.go:84] configureAuth start
	I0904 06:53:53.428449 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:53.447935 1796928 provision.go:143] copyHostCerts
	I0904 06:53:53.448036 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem, removing ...
	I0904 06:53:53.448051 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem
	I0904 06:53:53.448113 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem (1082 bytes)
	I0904 06:53:53.448215 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem, removing ...
	I0904 06:53:53.448223 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem
	I0904 06:53:53.448247 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem (1123 bytes)
	I0904 06:53:53.448320 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem, removing ...
	I0904 06:53:53.448326 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem
	I0904 06:53:53.448347 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem (1675 bytes)
	I0904 06:53:53.448409 1796928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-520775 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-520775 localhost minikube]
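
The provision step above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.103.2, the profile name, localhost, minikube). The sketch below produces a comparable certificate with crypto/x509; it is self-signed for brevity, whereas the real flow signs with the profile CA (ca.pem/ca-key.pem), so only the SAN handling is representative:

// servercert.go - sketch of issuing a server certificate carrying the SANs
// from the log above. Self-signed for brevity; the actual provisioner signs
// with the profile CA instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-520775"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-520775", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
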
	I0904 06:53:53.540900 1796928 provision.go:177] copyRemoteCerts
	I0904 06:53:53.540966 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 06:53:53.541003 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.558727 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:53.650335 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 06:53:53.677813 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0904 06:53:53.700987 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 06:53:53.724318 1796928 provision.go:87] duration metric: took 295.918548ms to configureAuth
	I0904 06:53:53.724345 1796928 ubuntu.go:206] setting minikube options for container-runtime
	I0904 06:53:53.724529 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:53.724626 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.743241 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.743467 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.743488 1796928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 06:53:54.045106 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 06:53:54.045134 1796928 machine.go:96] duration metric: took 4.060362432s to provisionDockerMachine
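
Every provisioning step above runs as an SSH command against 127.0.0.1:34279 (the host port mapped to the container's port 22) using the profile's id_rsa key. A compact sketch of that remote-command plumbing with golang.org/x/crypto/ssh (host, port, user and key path taken from the log; the helper name is illustrative):

// sshrun.go - sketch of running a single remote command the way the
// ssh_runner entries above do, using golang.org/x/crypto/ssh.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, command string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test machine; a real client verifies the host key
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runRemote("127.0.0.1:34279", "docker",
		"/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa",
		"cat /etc/os-release")
	fmt.Println(out, err)
}
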
	I0904 06:53:54.045148 1796928 start.go:293] postStartSetup for "default-k8s-diff-port-520775" (driver="docker")
	I0904 06:53:54.045187 1796928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 06:53:54.045256 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 06:53:54.045307 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.064198 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.152873 1796928 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 06:53:54.156293 1796928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 06:53:54.156319 1796928 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 06:53:54.156326 1796928 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 06:53:54.156333 1796928 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 06:53:54.156345 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/addons for local assets ...
	I0904 06:53:54.156399 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/files for local assets ...
	I0904 06:53:54.156481 1796928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem -> 15207162.pem in /etc/ssl/certs
	I0904 06:53:54.156610 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 06:53:54.165073 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:54.187780 1796928 start.go:296] duration metric: took 142.614938ms for postStartSetup
	I0904 06:53:54.187887 1796928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:53:54.187937 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.205683 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.292859 1796928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 06:53:54.297265 1796928 fix.go:56] duration metric: took 4.65950064s for fixHost
	I0904 06:53:54.297289 1796928 start.go:83] releasing machines lock for "default-k8s-diff-port-520775", held for 4.659549727s
	I0904 06:53:54.297358 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:54.315327 1796928 ssh_runner.go:195] Run: cat /version.json
	I0904 06:53:54.315393 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.315420 1796928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 06:53:54.315484 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.335338 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.336109 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.493584 1796928 ssh_runner.go:195] Run: systemctl --version
	I0904 06:53:54.498345 1796928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 06:53:54.638467 1796928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 06:53:54.642924 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.652284 1796928 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 06:53:54.652347 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.660849 1796928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 06:53:54.660875 1796928 start.go:495] detecting cgroup driver to use...
	I0904 06:53:54.660913 1796928 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 06:53:54.660966 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 06:53:54.672418 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 06:53:54.683134 1796928 docker.go:218] disabling cri-docker service (if available) ...
	I0904 06:53:54.683181 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 06:53:54.695400 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 06:53:54.706646 1796928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 06:53:54.793740 1796928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 06:53:54.873854 1796928 docker.go:234] disabling docker service ...
	I0904 06:53:54.873933 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 06:53:54.885885 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 06:53:54.896737 1796928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 06:53:54.980788 1796928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 06:53:55.057730 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 06:53:55.068310 1796928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 06:53:55.083683 1796928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 06:53:55.083736 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.093158 1796928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 06:53:55.093215 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.102672 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.113082 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.122399 1796928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 06:53:55.131334 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.140602 1796928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.150009 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.159908 1796928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 06:53:55.167649 1796928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 06:53:55.175680 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.254239 1796928 ssh_runner.go:195] Run: sudo systemctl restart crio
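
Taken together, the sed edits above aim to leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted (the exact file layout on the node will differ; this is only the net effect of the substitutions shown):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
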
	I0904 06:53:55.362926 1796928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 06:53:55.363001 1796928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 06:53:55.366648 1796928 start.go:563] Will wait 60s for crictl version
	I0904 06:53:55.366695 1796928 ssh_runner.go:195] Run: which crictl
	I0904 06:53:55.369962 1796928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 06:53:55.403453 1796928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 06:53:55.403538 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.441474 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.479608 1796928 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 06:53:55.480915 1796928 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-520775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 06:53:55.497935 1796928 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0904 06:53:55.502150 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:53:55.514295 1796928 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 06:53:55.514485 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:55.514556 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.564218 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.564245 1796928 crio.go:433] Images already preloaded, skipping extraction
	I0904 06:53:55.564292 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.602409 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.602436 1796928 cache_images.go:85] Images are preloaded, skipping loading
	I0904 06:53:55.602446 1796928 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 crio true true} ...
	I0904 06:53:55.602577 1796928 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-520775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 06:53:55.602645 1796928 ssh_runner.go:195] Run: crio config
	I0904 06:53:55.664543 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:55.664570 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:55.664584 1796928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 06:53:55.664612 1796928 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-520775 NodeName:default-k8s-diff-port-520775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 06:53:55.664768 1796928 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-520775"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 06:53:55.664845 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 06:53:55.673590 1796928 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 06:53:55.673661 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 06:53:55.682016 1796928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0904 06:53:55.699448 1796928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 06:53:55.717472 1796928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
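
The generated file combines four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A quick sanity check that each document parses and declares apiVersion/kind can be done with gopkg.in/yaml.v3, as in this sketch (file path taken from the log; this is not kubeadm's own validation):

// kubeadm_yaml_check.go - sketch: decode the multi-document kubeadm config and
// print each document's apiVersion and kind as a lightweight sanity check.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc == nil {
			continue
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
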
	I0904 06:53:55.734579 1796928 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0904 06:53:55.737941 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:53:55.748899 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.834506 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:55.848002 1796928 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775 for IP: 192.168.103.2
	I0904 06:53:55.848028 1796928 certs.go:194] generating shared ca certs ...
	I0904 06:53:55.848048 1796928 certs.go:226] acquiring lock for ca certs: {Name:mk2d06825c36f44304767b415a9a93c84edb2667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:55.848186 1796928 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key
	I0904 06:53:55.848228 1796928 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key
	I0904 06:53:55.848237 1796928 certs.go:256] generating profile certs ...
	I0904 06:53:55.848310 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/client.key
	I0904 06:53:55.848365 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key.6ec15110
	I0904 06:53:55.848406 1796928 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key
	I0904 06:53:55.848517 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem (1338 bytes)
	W0904 06:53:55.848547 1796928 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716_empty.pem, impossibly tiny 0 bytes
	I0904 06:53:55.848556 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 06:53:55.848578 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem (1082 bytes)
	I0904 06:53:55.848601 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem (1123 bytes)
	I0904 06:53:55.848627 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem (1675 bytes)
	I0904 06:53:55.848669 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:55.849251 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 06:53:55.876639 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 06:53:55.904012 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 06:53:55.936371 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 06:53:56.018233 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0904 06:53:56.041340 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 06:53:56.065911 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 06:53:56.089737 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 06:53:56.112935 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /usr/share/ca-certificates/15207162.pem (1708 bytes)
	I0904 06:53:56.138060 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 06:53:56.162385 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem --> /usr/share/ca-certificates/1520716.pem (1338 bytes)
	I0904 06:53:56.185546 1796928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 06:53:56.202891 1796928 ssh_runner.go:195] Run: openssl version
	I0904 06:53:56.208611 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15207162.pem && ln -fs /usr/share/ca-certificates/15207162.pem /etc/ssl/certs/15207162.pem"
	I0904 06:53:56.219865 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223785 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:07 /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223867 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.231657 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15207162.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 06:53:56.243527 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 06:53:56.253334 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257449 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 06:00 /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257517 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.264253 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 06:53:56.273629 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1520716.pem && ln -fs /usr/share/ca-certificates/1520716.pem /etc/ssl/certs/1520716.pem"
	I0904 06:53:56.283120 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286378 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:07 /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286450 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.293207 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1520716.pem /etc/ssl/certs/51391683.0"
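The openssl x509 -hash calls above compute the subject-name hash OpenSSL uses to locate CA certificates: each CA copied under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <hash>.0 (minikubeCA.pem becomes b5213941.0 here). A minimal sketch of that step for the minikubeCA path from this log:

    # Reproduce the hash-named symlink OpenSSL expects for CA lookup.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0 in this run
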
	I0904 06:53:56.301668 1796928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 06:53:56.308006 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 06:53:56.315155 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 06:53:56.322059 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 06:53:56.329568 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 06:53:56.337737 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 06:53:56.345511 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
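The -checkend 86400 runs above verify that each control-plane certificate stays valid for at least the next 24 hours: openssl exits 0 if the certificate will still be valid then, 1 if it would expire in that window. For a single cert from this log:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h"
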
	I0904 06:53:56.353351 1796928 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:56.353482 1796928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 06:53:56.353539 1796928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 06:53:56.397941 1796928 cri.go:89] found id: ""
	I0904 06:53:56.398012 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 06:53:56.408886 1796928 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 06:53:56.408981 1796928 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0904 06:53:56.409041 1796928 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 06:53:56.424530 1796928 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 06:53:56.425727 1796928 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-520775" does not appear in /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.426580 1796928 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-1516970/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-520775" cluster setting kubeconfig missing "default-k8s-diff-port-520775" context setting]
	I0904 06:53:56.427949 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.430031 1796928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 06:53:56.444430 1796928 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0904 06:53:56.444470 1796928 kubeadm.go:593] duration metric: took 35.478353ms to restartPrimaryControlPlane
	I0904 06:53:56.444481 1796928 kubeadm.go:394] duration metric: took 91.143305ms to StartCluster
	I0904 06:53:56.444503 1796928 settings.go:142] acquiring lock: {Name:mk2d1c8a569b62879275d6daa2b799b595d6bca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.444560 1796928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.447245 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.447495 1796928 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 06:53:56.447711 1796928 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 06:53:56.447836 1796928 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447860 1796928 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447868 1796928 addons.go:247] addon storage-provisioner should already be in state true
	I0904 06:53:56.447888 1796928 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447903 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.447928 1796928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-520775"
	I0904 06:53:56.447921 1796928 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447939 1796928 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447979 1796928 addons.go:247] addon dashboard should already be in state true
	I0904 06:53:56.447980 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	W0904 06:53:56.447982 1796928 addons.go:247] addon metrics-server should already be in state true
	I0904 06:53:56.448017 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448020 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448276 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448431 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448473 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448520 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.450093 1796928 out.go:179] * Verifying Kubernetes components...
	I0904 06:53:56.451389 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:56.482390 1796928 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.482412 1796928 addons.go:247] addon default-storageclass should already be in state true
	I0904 06:53:56.482437 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.482730 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.485071 1796928 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 06:53:56.485089 1796928 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0904 06:53:56.488270 1796928 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.488294 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 06:53:56.488355 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.490382 1796928 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0904 06:53:56.491521 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0904 06:53:56.491536 1796928 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0904 06:53:56.491584 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.496773 1796928 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W0904 06:53:55.257485 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:53:57.757496 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:53:56.497920 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 06:53:56.497941 1796928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 06:53:56.498005 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.511983 1796928 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.512010 1796928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 06:53:56.512072 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.529596 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.531423 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.543761 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.547939 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.815518 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:56.824564 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.900475 1796928 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:53:56.903122 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.915401 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0904 06:53:56.915439 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0904 06:53:57.011674 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 06:53:57.011705 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0904 06:53:57.025890 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0904 06:53:57.025929 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0904 06:53:57.130640 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0904 06:53:57.130669 1796928 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0904 06:53:57.201935 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 06:53:57.201971 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	W0904 06:53:57.228446 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228496 1796928 retry.go:31] will retry after 331.542893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 06:53:57.228576 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228595 1796928 retry.go:31] will retry after 234.661911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
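The two apply failures above are expected at this point in the restart: the apiserver is not yet listening on localhost:8444, so kubectl's OpenAPI download for validation is refused and minikube retries shortly afterwards (the later apply --force attempts succeed). A sketch of how the same condition could be confirmed from the node:

    # Connection refused here reproduces the validation error; once the apiserver is up this returns 200.
    curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8444/openapi/v2 || echo "apiserver not reachable yet"
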
	I0904 06:53:57.233201 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0904 06:53:57.233235 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0904 06:53:57.312449 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.312483 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 06:53:57.335196 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0904 06:53:57.335296 1796928 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0904 06:53:57.340794 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.423747 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0904 06:53:57.423855 1796928 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0904 06:53:57.464378 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:57.517739 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0904 06:53:57.517836 1796928 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0904 06:53:57.560380 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:57.621494 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0904 06:53:57.621580 1796928 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0904 06:53:57.719817 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:53:57.719851 1796928 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0904 06:53:57.808921 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:54:00.222294 1796928 node_ready.go:49] node "default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:00.222393 1796928 node_ready.go:38] duration metric: took 3.321861305s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:54:00.222414 1796928 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:54:00.222514 1796928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:54:02.420531 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.07964965s)
	I0904 06:54:02.420574 1796928 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-520775"
	I0904 06:54:02.420586 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.956118872s)
	I0904 06:54:02.420682 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.860244874s)
	I0904 06:54:02.420925 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.611964012s)
	I0904 06:54:02.420956 1796928 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.198413181s)
	I0904 06:54:02.421147 1796928 api_server.go:72] duration metric: took 5.973615373s to wait for apiserver process to appear ...
	I0904 06:54:02.421161 1796928 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:54:02.421181 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.422911 1796928 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-520775 addons enable metrics-server
	
	I0904 06:54:02.426397 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.426463 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:02.428576 1796928 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	W0904 06:53:59.759069 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:02.258100 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:02.429861 1796928 addons.go:514] duration metric: took 5.982154586s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:54:02.921448 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.926218 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.926239 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:03.421924 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:03.427035 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0904 06:54:03.428103 1796928 api_server.go:141] control plane version: v1.34.0
	I0904 06:54:03.428127 1796928 api_server.go:131] duration metric: took 1.006959868s to wait for apiserver health ...
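Between 06:54:02 and 06:54:03 the apiserver's /healthz goes from 500 (the rbac/bootstrap-roles and apiservice-discovery-controller post-start hooks are still pending) to 200, at which point minikube moves on. A rough manual equivalent of that poll, using the endpoint from this log:

    # Poll until the health endpoint returns its plain "ok" body (-k because the cluster CA is not in the host trust store).
    until curl -sk https://192.168.103.2:8444/healthz | grep -qx ok; do sleep 0.5; done
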
	I0904 06:54:03.428136 1796928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:54:03.434471 1796928 system_pods.go:59] 9 kube-system pods found
	I0904 06:54:03.434508 1796928 system_pods.go:61] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.434519 1796928 system_pods.go:61] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.434525 1796928 system_pods.go:61] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.434533 1796928 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.434544 1796928 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.434564 1796928 system_pods.go:61] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.434573 1796928 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.434586 1796928 system_pods.go:61] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.434594 1796928 system_pods.go:61] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.434602 1796928 system_pods.go:74] duration metric: took 6.460113ms to wait for pod list to return data ...
	I0904 06:54:03.434614 1796928 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:54:03.437095 1796928 default_sa.go:45] found service account: "default"
	I0904 06:54:03.437116 1796928 default_sa.go:55] duration metric: took 2.49678ms for default service account to be created ...
	I0904 06:54:03.437124 1796928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:54:03.439954 1796928 system_pods.go:86] 9 kube-system pods found
	I0904 06:54:03.439997 1796928 system_pods.go:89] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.440010 1796928 system_pods.go:89] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.440018 1796928 system_pods.go:89] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.440029 1796928 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.440043 1796928 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.440053 1796928 system_pods.go:89] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.440060 1796928 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.440072 1796928 system_pods.go:89] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.440078 1796928 system_pods.go:89] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.440085 1796928 system_pods.go:126] duration metric: took 2.955ms to wait for k8s-apps to be running ...
	I0904 06:54:03.440100 1796928 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:54:03.440162 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:54:03.451705 1796928 system_svc.go:56] duration metric: took 11.594555ms WaitForService to wait for kubelet
	I0904 06:54:03.451731 1796928 kubeadm.go:578] duration metric: took 7.004201759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:54:03.451748 1796928 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:54:03.455005 1796928 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:54:03.455036 1796928 node_conditions.go:123] node cpu capacity is 8
	I0904 06:54:03.455062 1796928 node_conditions.go:105] duration metric: took 3.308068ms to run NodePressure ...
	I0904 06:54:03.455079 1796928 start.go:241] waiting for startup goroutines ...
	I0904 06:54:03.455095 1796928 start.go:246] waiting for cluster config update ...
	I0904 06:54:03.455112 1796928 start.go:255] writing updated cluster config ...
	I0904 06:54:03.455408 1796928 ssh_runner.go:195] Run: rm -f paused
	I0904 06:54:03.458944 1796928 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:03.462665 1796928 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 06:54:04.757792 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:07.257591 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:05.468478 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:07.500893 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:09.756895 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:12.257352 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:09.968652 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:12.468012 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:14.756854 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:17.256905 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:14.468746 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:16.967726 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:18.968373 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:19.257325 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:21.757694 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:20.968633 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:23.467871 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:24.256489 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:24.756710 1794879 pod_ready.go:94] pod "coredns-66bc5c9577-j5gww" is "Ready"
	I0904 06:54:24.756744 1794879 pod_ready.go:86] duration metric: took 31.505206553s for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.759357 1794879 pod_ready.go:83] waiting for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.763174 1794879 pod_ready.go:94] pod "etcd-embed-certs-589812" is "Ready"
	I0904 06:54:24.763194 1794879 pod_ready.go:86] duration metric: took 3.815458ms for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.765056 1794879 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.768709 1794879 pod_ready.go:94] pod "kube-apiserver-embed-certs-589812" is "Ready"
	I0904 06:54:24.768729 1794879 pod_ready.go:86] duration metric: took 3.655905ms for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.770312 1794879 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.955369 1794879 pod_ready.go:94] pod "kube-controller-manager-embed-certs-589812" is "Ready"
	I0904 06:54:24.955399 1794879 pod_ready.go:86] duration metric: took 185.06856ms for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.155371 1794879 pod_ready.go:83] waiting for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.555016 1794879 pod_ready.go:94] pod "kube-proxy-xqvlx" is "Ready"
	I0904 06:54:25.555045 1794879 pod_ready.go:86] duration metric: took 399.644529ms for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.754864 1794879 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155740 1794879 pod_ready.go:94] pod "kube-scheduler-embed-certs-589812" is "Ready"
	I0904 06:54:26.155768 1794879 pod_ready.go:86] duration metric: took 400.874171ms for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155779 1794879 pod_ready.go:40] duration metric: took 32.907618487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:26.201526 1794879 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:26.203310 1794879 out.go:179] * Done! kubectl is now configured to use "embed-certs-589812" cluster and "default" namespace by default
	W0904 06:54:25.468180 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:27.468649 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:29.468703 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:31.967748 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:34.467966 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	I0904 06:54:36.468207 1796928 pod_ready.go:94] pod "coredns-66bc5c9577-hm47q" is "Ready"
	I0904 06:54:36.468238 1796928 pod_ready.go:86] duration metric: took 33.005546695s for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.470247 1796928 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.474087 1796928 pod_ready.go:94] pod "etcd-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.474113 1796928 pod_ready.go:86] duration metric: took 3.802864ms for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.476057 1796928 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.479419 1796928 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.479437 1796928 pod_ready.go:86] duration metric: took 3.359104ms for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.481399 1796928 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.666267 1796928 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.666294 1796928 pod_ready.go:86] duration metric: took 184.873705ms for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.866510 1796928 pod_ready.go:83] waiting for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.266395 1796928 pod_ready.go:94] pod "kube-proxy-zrlrh" is "Ready"
	I0904 06:54:37.266428 1796928 pod_ready.go:86] duration metric: took 399.888589ms for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.466543 1796928 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866935 1796928 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:37.866974 1796928 pod_ready.go:86] duration metric: took 400.403816ms for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866986 1796928 pod_ready.go:40] duration metric: took 34.408008083s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:37.912300 1796928 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:37.913920 1796928 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-520775" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 07:01:55 embed-certs-589812 crio[662]: time="2025-09-04 07:01:55.430073951Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=de0360e9-2f92-4eee-b2d2-cf7458c5a90d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:01:56 embed-certs-589812 crio[662]: time="2025-09-04 07:01:56.430151146Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=bdfa3039-8b17-4559-9eaa-2c564ef1d3ac name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:01:56 embed-certs-589812 crio[662]: time="2025-09-04 07:01:56.430377941Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=bdfa3039-8b17-4559-9eaa-2c564ef1d3ac name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:07 embed-certs-589812 crio[662]: time="2025-09-04 07:02:07.429312173Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a42d0d97-b1be-4dff-b84b-ed38a32e1c8f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:07 embed-certs-589812 crio[662]: time="2025-09-04 07:02:07.429617446Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a42d0d97-b1be-4dff-b84b-ed38a32e1c8f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:09 embed-certs-589812 crio[662]: time="2025-09-04 07:02:09.429395573Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f168c282-9ae6-40ae-8a9b-d64f86391205 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:09 embed-certs-589812 crio[662]: time="2025-09-04 07:02:09.429688838Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f168c282-9ae6-40ae-8a9b-d64f86391205 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:20 embed-certs-589812 crio[662]: time="2025-09-04 07:02:20.430489447Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a6ff25e1-043f-47da-a1e5-eacd7989c82a name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:20 embed-certs-589812 crio[662]: time="2025-09-04 07:02:20.430838838Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a6ff25e1-043f-47da-a1e5-eacd7989c82a name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:20 embed-certs-589812 crio[662]: time="2025-09-04 07:02:20.431748166Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6ea15e24-0916-417a-aecc-0872465e8ed0 name=/runtime.v1.ImageService/PullImage
	Sep 04 07:02:20 embed-certs-589812 crio[662]: time="2025-09-04 07:02:20.433256764Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 04 07:02:21 embed-certs-589812 crio[662]: time="2025-09-04 07:02:21.429396614Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e0ca538c-eef0-4a8f-bfc2-620ec7dbc858 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:21 embed-certs-589812 crio[662]: time="2025-09-04 07:02:21.429789118Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e0ca538c-eef0-4a8f-bfc2-620ec7dbc858 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:36 embed-certs-589812 crio[662]: time="2025-09-04 07:02:36.429869350Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a44e6713-5d53-4f16-8de5-54fea5442394 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:36 embed-certs-589812 crio[662]: time="2025-09-04 07:02:36.430123322Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a44e6713-5d53-4f16-8de5-54fea5442394 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:51 embed-certs-589812 crio[662]: time="2025-09-04 07:02:51.429797052Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a8bc8a20-6b0d-479a-9944-9e2f8afaff19 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:51 embed-certs-589812 crio[662]: time="2025-09-04 07:02:51.430153061Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a8bc8a20-6b0d-479a-9944-9e2f8afaff19 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:02 embed-certs-589812 crio[662]: time="2025-09-04 07:03:02.429322260Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=76727d7c-f602-4a83-9351-31b4a6cd30ce name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:02 embed-certs-589812 crio[662]: time="2025-09-04 07:03:02.429689685Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=76727d7c-f602-4a83-9351-31b4a6cd30ce name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:04 embed-certs-589812 crio[662]: time="2025-09-04 07:03:04.429428000Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=15998c0f-5593-4a67-b77f-fe4b26c28d1c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:04 embed-certs-589812 crio[662]: time="2025-09-04 07:03:04.429754157Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=15998c0f-5593-4a67-b77f-fe4b26c28d1c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:17 embed-certs-589812 crio[662]: time="2025-09-04 07:03:17.429369038Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=493f6e69-98bd-40eb-846d-687e11a3bd4c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:17 embed-certs-589812 crio[662]: time="2025-09-04 07:03:17.429617655Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e1bac48e-c1d5-439d-b340-eaf730ce25d9 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:17 embed-certs-589812 crio[662]: time="2025-09-04 07:03:17.429756895Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=493f6e69-98bd-40eb-846d-687e11a3bd4c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:17 embed-certs-589812 crio[662]: time="2025-09-04 07:03:17.429915282Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e1bac48e-c1d5-439d-b340-eaf730ce25d9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	9b48d9ce849dd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   8fe1865eb4dd6       dashboard-metrics-scraper-6ffb444bf9-4tbhb
	afa0e6ea9b635       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   15bf762c9c47a       storage-provisioner
	f107300c89141       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   a3d76d7c6a35f       busybox
	c522dae6d74af       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   a4b2a1fb6cf3a       coredns-66bc5c9577-j5gww
	db5784a7ee37e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   b71d4bdd51f0f       kindnet-wtgxv
	a36bc9cde6aab       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   f209ea9e0ae62       kube-proxy-xqvlx
	da3aa45c71a4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   15bf762c9c47a       storage-provisioner
	02be4ef72489d       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   9c852c99349a6       kube-apiserver-embed-certs-589812
	9cafc6f062626       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   a26cfa98a53a5       kube-controller-manager-embed-certs-589812
	919de7ee74e8f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   f7aae24dad753       kube-scheduler-embed-certs-589812
	136e620f58d0d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   c455e8a87f36f       etcd-embed-certs-589812
	
	
	==> coredns [c522dae6d74afb1a16f2a235b7bef26ec4cfd05d1b26ea73bc6aa1040ae84643] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53659 - 2854 "HINFO IN 7369246950217003682.7495171770416462074. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036663369s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-589812
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-589812
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=embed-certs-589812
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_52_26_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:52:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-589812
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 07:03:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 07:02:49 +0000   Thu, 04 Sep 2025 06:52:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 07:02:49 +0000   Thu, 04 Sep 2025 06:52:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 07:02:49 +0000   Thu, 04 Sep 2025 06:52:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 07:02:49 +0000   Thu, 04 Sep 2025 06:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-589812
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ae8441598cb47a2ab529a010d5cacbb
	  System UUID:                9cb4d768-9a5f-4c82-9fd6-13f2aad0d14f
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-j5gww                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-embed-certs-589812                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-wtgxv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-embed-certs-589812             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-embed-certs-589812    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-xqvlx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-embed-certs-589812             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-746fcd58dc-prlxr               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4tbhb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wlwcq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m35s                  kube-proxy       
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node embed-certs-589812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node embed-certs-589812 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node embed-certs-589812 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node embed-certs-589812 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node embed-certs-589812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node embed-certs-589812 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                    node-controller  Node embed-certs-589812 event: Registered Node embed-certs-589812 in Controller
	  Normal   NodeReady                10m                    kubelet          Node embed-certs-589812 status is now: NodeReady
	  Normal   Starting                 9m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m43s (x8 over 9m43s)  kubelet          Node embed-certs-589812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m43s (x8 over 9m43s)  kubelet          Node embed-certs-589812 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m43s (x8 over 9m43s)  kubelet          Node embed-certs-589812 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m34s                  node-controller  Node embed-certs-589812 event: Registered Node embed-certs-589812 in Controller
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +2.011770] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000003] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +1.535866] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000001] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.003918] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +2.555764] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000006] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000004] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +8.191102] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000008] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000002] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	
	
	==> etcd [136e620f58d0da79daaa7f8118e790ac652690df1da4c027e49d29374f801e1d] <==
	{"level":"warn","ts":"2025-09-04T06:53:48.703186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.721417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.728418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.736406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.742640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.748826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.754719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.761109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.800633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.820019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.826346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.832680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.839825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.846971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.853922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.861180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.868166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.875139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.881856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.889957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.896348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.927041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.933824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.940567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.992225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55996","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:03:28 up  4:45,  0 users,  load average: 0.41, 0.94, 1.50
	Linux embed-certs-589812 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [db5784a7ee37e8c68ba772498e333580e694587cb505ba865c6ea871e108f5a1] <==
	I0904 07:01:21.906490       1 main.go:301] handling current node
	I0904 07:01:31.909370       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:01:31.909401       1 main.go:301] handling current node
	I0904 07:01:41.907922       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:01:41.907956       1 main.go:301] handling current node
	I0904 07:01:51.906874       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:01:51.906909       1 main.go:301] handling current node
	I0904 07:02:01.907898       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:02:01.907958       1 main.go:301] handling current node
	I0904 07:02:11.907968       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:02:11.908023       1 main.go:301] handling current node
	I0904 07:02:21.906624       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:02:21.906656       1 main.go:301] handling current node
	I0904 07:02:31.907890       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:02:31.907943       1 main.go:301] handling current node
	I0904 07:02:41.907885       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:02:41.907916       1 main.go:301] handling current node
	I0904 07:02:51.908278       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:02:51.908317       1 main.go:301] handling current node
	I0904 07:03:01.913717       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:03:01.913761       1 main.go:301] handling current node
	I0904 07:03:11.907238       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:03:11.907277       1 main.go:301] handling current node
	I0904 07:03:21.906449       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:03:21.906493       1 main.go:301] handling current node
	
	
	==> kube-apiserver [02be4ef72489d4392f911e0670f92eed06830e855080845246dde88d6a655eb3] <==
	I0904 06:59:46.911535       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 06:59:50.715039       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 06:59:50.715082       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 06:59:50.715100       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 06:59:50.716225       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 06:59:50.716303       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 06:59:50.716314       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:00:33.995822       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:00:49.310563       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:01:36.030048       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 07:01:50.715709       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:01:50.715766       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 07:01:50.715781       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:01:50.716819       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:01:50.716909       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 07:01:50.716923       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:02:12.479577       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:02:42.419201       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:03:13.495631       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [9cafc6f06262606529257c56da917e67d347655c38b404f7c4cdc000c6f4a852] <==
	I0904 06:57:25.071738       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:57:55.050279       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:57:55.078919       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:58:25.054211       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:58:25.086070       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:58:55.059133       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:58:55.093187       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:59:25.063083       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:59:25.099674       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:59:55.067909       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:59:55.106843       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:00:25.072666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:00:25.114537       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:00:55.077252       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:00:55.122785       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:01:25.082810       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:01:25.130485       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:01:55.087553       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:01:55.137541       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:02:25.092034       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:02:25.144533       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:02:55.097413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:02:55.151738       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:03:25.103519       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:03:25.159445       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [a36bc9cde6aab4b8aa2805106724a69da61f56fe5d00554c661d19d13a4f6b93] <==
	I0904 06:53:51.725383       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:53:51.861351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:53:51.962353       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:53:51.962391       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0904 06:53:51.962509       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:53:52.102819       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:53:52.102882       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:53:52.107255       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:53:52.107605       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:53:52.107632       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:53:52.110555       1 config.go:200] "Starting service config controller"
	I0904 06:53:52.110579       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:53:52.110600       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:53:52.110615       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:53:52.110635       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:53:52.110640       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:53:52.110651       1 config.go:309] "Starting node config controller"
	I0904 06:53:52.110662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:53:52.110669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:53:52.211391       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:53:52.211437       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 06:53:52.211436       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [919de7ee74e8fee36f9de7bc074a0b27a2912e590e7d25095502ed862ce411a3] <==
	I0904 06:53:47.429971       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:53:49.700217       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:53:49.700386       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0904 06:53:49.700434       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:53:49.700470       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:53:49.802079       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 06:53:49.802126       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:53:49.807337       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:53:49.807484       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:53:49.808677       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:53:49.809017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 06:53:49.927006       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 07:02:46 embed-certs-589812 kubelet[811]: I0904 07:02:46.429166     811 scope.go:117] "RemoveContainer" containerID="9b48d9ce849dd90a46bca7ae681af400fe4b3c870e2f194e8353744cec4c75ac"
	Sep 04 07:02:46 embed-certs-589812 kubelet[811]: E0904 07:02:46.429384     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4tbhb_kubernetes-dashboard(f3c95b95-bd44-4fd4-8e19-a2d916fa0f62)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4tbhb" podUID="f3c95b95-bd44-4fd4-8e19-a2d916fa0f62"
	Sep 04 07:02:50 embed-certs-589812 kubelet[811]: E0904 07:02:50.520402     811 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 04 07:02:50 embed-certs-589812 kubelet[811]: E0904 07:02:50.520476     811 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 04 07:02:50 embed-certs-589812 kubelet[811]: E0904 07:02:50.520595     811 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-wlwcq_kubernetes-dashboard(ddf273f4-7295-4b47-a1af-b2f7c30d2f94): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 04 07:02:50 embed-certs-589812 kubelet[811]: E0904 07:02:50.520644     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wlwcq" podUID="ddf273f4-7295-4b47-a1af-b2f7c30d2f94"
	Sep 04 07:02:51 embed-certs-589812 kubelet[811]: E0904 07:02:51.430476     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-prlxr" podUID="58b70501-6011-4b99-80ff-1f9b422ae481"
	Sep 04 07:02:55 embed-certs-589812 kubelet[811]: E0904 07:02:55.504984     811 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969375504761354  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:02:55 embed-certs-589812 kubelet[811]: E0904 07:02:55.505023     811 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969375504761354  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:01 embed-certs-589812 kubelet[811]: I0904 07:03:01.429257     811 scope.go:117] "RemoveContainer" containerID="9b48d9ce849dd90a46bca7ae681af400fe4b3c870e2f194e8353744cec4c75ac"
	Sep 04 07:03:01 embed-certs-589812 kubelet[811]: E0904 07:03:01.429492     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4tbhb_kubernetes-dashboard(f3c95b95-bd44-4fd4-8e19-a2d916fa0f62)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4tbhb" podUID="f3c95b95-bd44-4fd4-8e19-a2d916fa0f62"
	Sep 04 07:03:02 embed-certs-589812 kubelet[811]: E0904 07:03:02.430059     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wlwcq" podUID="ddf273f4-7295-4b47-a1af-b2f7c30d2f94"
	Sep 04 07:03:04 embed-certs-589812 kubelet[811]: E0904 07:03:04.430107     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-prlxr" podUID="58b70501-6011-4b99-80ff-1f9b422ae481"
	Sep 04 07:03:05 embed-certs-589812 kubelet[811]: E0904 07:03:05.506475     811 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969385506209588  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:05 embed-certs-589812 kubelet[811]: E0904 07:03:05.506517     811 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969385506209588  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:14 embed-certs-589812 kubelet[811]: I0904 07:03:14.428684     811 scope.go:117] "RemoveContainer" containerID="9b48d9ce849dd90a46bca7ae681af400fe4b3c870e2f194e8353744cec4c75ac"
	Sep 04 07:03:14 embed-certs-589812 kubelet[811]: E0904 07:03:14.428899     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4tbhb_kubernetes-dashboard(f3c95b95-bd44-4fd4-8e19-a2d916fa0f62)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4tbhb" podUID="f3c95b95-bd44-4fd4-8e19-a2d916fa0f62"
	Sep 04 07:03:15 embed-certs-589812 kubelet[811]: E0904 07:03:15.507631     811 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969395507409781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:15 embed-certs-589812 kubelet[811]: E0904 07:03:15.507672     811 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969395507409781  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:17 embed-certs-589812 kubelet[811]: E0904 07:03:17.430123     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-prlxr" podUID="58b70501-6011-4b99-80ff-1f9b422ae481"
	Sep 04 07:03:17 embed-certs-589812 kubelet[811]: E0904 07:03:17.430130     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wlwcq" podUID="ddf273f4-7295-4b47-a1af-b2f7c30d2f94"
	Sep 04 07:03:25 embed-certs-589812 kubelet[811]: E0904 07:03:25.509043     811 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969405508767258  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:25 embed-certs-589812 kubelet[811]: E0904 07:03:25.509080     811 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969405508767258  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:26 embed-certs-589812 kubelet[811]: I0904 07:03:26.429303     811 scope.go:117] "RemoveContainer" containerID="9b48d9ce849dd90a46bca7ae681af400fe4b3c870e2f194e8353744cec4c75ac"
	Sep 04 07:03:26 embed-certs-589812 kubelet[811]: E0904 07:03:26.429493     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4tbhb_kubernetes-dashboard(f3c95b95-bd44-4fd4-8e19-a2d916fa0f62)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4tbhb" podUID="f3c95b95-bd44-4fd4-8e19-a2d916fa0f62"
	
	
	==> storage-provisioner [afa0e6ea9b635b90ae3047ad7a9771161aceb849079022cb4f3aa360b0ae3853] <==
	W0904 07:03:03.115094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:05.118396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:05.124423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:07.127695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:07.131810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:09.135199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:09.139378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:11.143084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:11.147062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:13.149884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:13.155079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:15.158131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:15.162360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:17.165575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:17.169519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:19.173053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:19.178274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:21.180945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:21.185182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:23.188639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:23.193083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:25.196500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:25.200423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:27.204359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:27.209150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [da3aa45c71a4c394a688ba0cada3665a08c23e51e587d98fad20c6d189740263] <==
	I0904 06:53:51.421931       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 06:54:21.424162       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-589812 -n embed-certs-589812
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-589812 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-prlxr kubernetes-dashboard-855c9754f9-wlwcq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-589812 describe pod metrics-server-746fcd58dc-prlxr kubernetes-dashboard-855c9754f9-wlwcq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-589812 describe pod metrics-server-746fcd58dc-prlxr kubernetes-dashboard-855c9754f9-wlwcq: exit status 1 (59.381718ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-prlxr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wlwcq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-589812 describe pod metrics-server-746fcd58dc-prlxr kubernetes-dashboard-855c9754f9-wlwcq: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f6t79" [c1e25916-a16a-4ee2-9aaa-895d41ffbe6e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 06:57:34.786050 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:57:40.414616 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:57:57.337152 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:59:31.715849 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-04 07:03:38.526552977 +0000 UTC m=+3809.315583606
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 describe po kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-520775 describe po kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-f6t79
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-520775/192.168.103.2
Start Time:       Thu, 04 Sep 2025 06:54:05 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5xrlz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-5xrlz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m33s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79 to default-k8s-diff-port-520775
  Normal   Pulling    4m39s (x5 over 9m32s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     4m8s (x5 over 8m59s)    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m8s (x5 over 8m59s)    kubelet            Error: ErrImagePull
  Warning  Failed     2m55s (x16 over 8m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    110s (x21 over 8m58s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 logs kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-520775 logs kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard: exit status 1 (73.211632ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-f6t79" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-520775 logs kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
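Note: the events above attribute the failure to Docker Hub's anonymous pull limit (toomanyrequests) rather than to the restarted cluster itself; the pod is scheduled and its volumes are mounted, only the image pull fails. One mitigation sometimes applied by hand, assuming the dashboard image is already present in the host's local cache (side-loading avoids the registry pull; whether a digest-pinned reference is satisfied by a side-loaded tag depends on the runtime, so treat this as a sketch, not part of the harness):

	out/minikube-linux-amd64 -p default-k8s-diff-port-520775 image load docker.io/kubernetesui/dashboard:v2.7.0

Alternatively, starting the profile with --registry-mirror pointed at an authenticated or unthrottled mirror keeps crio's pulls from counting against the anonymous docker.io quota.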
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-520775
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-520775:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b",
	        "Created": "2025-09-04T06:52:50.464909498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1797115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:53:49.68372136Z",
	            "FinishedAt": "2025-09-04T06:53:48.816578784Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b/hostname",
	        "HostsPath": "/var/lib/docker/containers/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b/hosts",
	        "LogPath": "/var/lib/docker/containers/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b-json.log",
	        "Name": "/default-k8s-diff-port-520775",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-520775:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-520775",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b",
	                "LowerDir": "/var/lib/docker/overlay2/5e09d1bda7a40a6f708c59900f6a849375301dbcff052f63e4d5f72ca87335fc-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e09d1bda7a40a6f708c59900f6a849375301dbcff052f63e4d5f72ca87335fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e09d1bda7a40a6f708c59900f6a849375301dbcff052f63e4d5f72ca87335fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e09d1bda7a40a6f708c59900f6a849375301dbcff052f63e4d5f72ca87335fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-520775",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-520775/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-520775",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-520775",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-520775",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bd7abb2a2334b072b79979d645221e469a509371e8a05103678f543cac4ce5",
	            "SandboxKey": "/var/run/docker/netns/a0bd7abb2a23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34279"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34280"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34283"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34281"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34282"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-520775": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:69:a6:d0:fa:c5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e6b099093f4d2bc50dc9a105202a4f66367015ccdbff2e4084d5a24df38669d",
	                    "EndpointID": "ecbf99d262604c276b491ddb13ca849ee24efef7e85cb28c75d854b4b7cd0be3",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-520775",
	                        "172df401119a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
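Note: the inspect output shows the kic container running with the API server port 8444 published on 127.0.0.1:34282, so the container restart itself succeeded; the failure is confined to the dashboard image pull described above. The published mapping can be read back directly as a manual check (not part of the harness; /version is normally readable without credentials under the default RBAC bindings):

	docker port default-k8s-diff-port-520775 8444
	curl -sk https://127.0.0.1:34282/version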
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-520775 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-520775 logs -n 25: (1.203012829s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-869290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p old-k8s-version-869290 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-869290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ start   │ -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable metrics-server -p no-preload-574576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p no-preload-574576 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable dashboard -p no-preload-574576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p cert-expiration-620042 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ delete  │ -p cert-expiration-620042                                                                                                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │                     │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p kubernetes-upgrade-892549                                                                                                                                                                                                                  │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p disable-driver-mounts-393542                                                                                                                                                                                                               │ disable-driver-mounts-393542 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-589812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p embed-certs-589812 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-520775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p default-k8s-diff-port-520775 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-589812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-520775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:53:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:53:49.418555 1796928 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:53:49.418725 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.418774 1796928 out.go:374] Setting ErrFile to fd 2...
	I0904 06:53:49.418785 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.419117 1796928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:53:49.419985 1796928 out.go:368] Setting JSON to false
	I0904 06:53:49.421632 1796928 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16579,"bootTime":1756952250,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:53:49.421749 1796928 start.go:140] virtualization: kvm guest
	I0904 06:53:49.423972 1796928 out.go:179] * [default-k8s-diff-port-520775] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:53:49.425842 1796928 notify.go:220] Checking for updates...
	I0904 06:53:49.425850 1796928 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:53:49.427436 1796928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:53:49.428783 1796928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:49.429989 1796928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:53:49.431134 1796928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:53:49.432406 1796928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:53:49.434250 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:49.435089 1796928 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:53:49.462481 1796928 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:53:49.462577 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.536244 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.525128821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.536390 1796928 docker.go:318] overlay module found
	I0904 06:53:49.539526 1796928 out.go:179] * Using the docker driver based on existing profile
	I0904 06:53:49.540719 1796928 start.go:304] selected driver: docker
	I0904 06:53:49.540734 1796928 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.540822 1796928 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:53:49.541681 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.594566 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.585030944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.595064 1796928 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:49.595111 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:49.595174 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:49.595223 1796928 start.go:348] cluster config:
	{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.597216 1796928 out.go:179] * Starting "default-k8s-diff-port-520775" primary control-plane node in "default-k8s-diff-port-520775" cluster
	I0904 06:53:49.598401 1796928 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 06:53:49.599526 1796928 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 06:53:49.604882 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:49.604957 1796928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 06:53:49.604977 1796928 cache.go:58] Caching tarball of preloaded images
	I0904 06:53:49.604992 1796928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 06:53:49.605104 1796928 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 06:53:49.605123 1796928 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 06:53:49.605341 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.637613 1796928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 06:53:49.637635 1796928 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 06:53:49.637647 1796928 cache.go:232] Successfully downloaded all kic artifacts
	I0904 06:53:49.637673 1796928 start.go:360] acquireMachinesLock for default-k8s-diff-port-520775: {Name:mkd2b36988a85f8d5c3a19497a99007da8aadae2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 06:53:49.637729 1796928 start.go:364] duration metric: took 33.006µs to acquireMachinesLock for "default-k8s-diff-port-520775"
	I0904 06:53:49.637749 1796928 start.go:96] Skipping create...Using existing machine configuration
	I0904 06:53:49.637756 1796928 fix.go:54] fixHost starting: 
	I0904 06:53:49.637963 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.656941 1796928 fix.go:112] recreateIfNeeded on default-k8s-diff-port-520775: state=Stopped err=<nil>
	W0904 06:53:49.656986 1796928 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 06:53:49.524554 1794879 node_ready.go:49] node "embed-certs-589812" is "Ready"
	I0904 06:53:49.524655 1794879 node_ready.go:38] duration metric: took 3.407781482s for node "embed-certs-589812" to be "Ready" ...
	I0904 06:53:49.524688 1794879 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:53:49.524773 1794879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:53:51.714274 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.110482825s)
	I0904 06:53:51.714323 1794879 addons.go:479] Verifying addon metrics-server=true in "embed-certs-589812"
	I0904 06:53:51.714427 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.971633666s)
	I0904 06:53:51.714457 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.901617894s)
	I0904 06:53:51.714590 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.702133151s)
	I0904 06:53:51.714600 1794879 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.189780106s)
	I0904 06:53:51.714619 1794879 api_server.go:72] duration metric: took 5.87883589s to wait for apiserver process to appear ...
	I0904 06:53:51.714626 1794879 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:53:51.714643 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:51.716342 1794879 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-589812 addons enable metrics-server
	
	I0904 06:53:51.722283 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:51.722308 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:51.730360 1794879 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0904 06:53:51.731942 1794879 addons.go:514] duration metric: took 5.89615636s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:53:52.215034 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.219745 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.219786 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:52.715125 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.719686 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.719714 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:53.215303 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:53.219535 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0904 06:53:53.220593 1794879 api_server.go:141] control plane version: v1.34.0
	I0904 06:53:53.220626 1794879 api_server.go:131] duration metric: took 1.505992813s to wait for apiserver health ...
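The healthz wait above (api_server.go) amounts to repeatedly GETting https://192.168.94.2:8443/healthz and treating anything other than 200 as "not ready yet". A minimal, self-contained Go sketch of that loop is below; the URL, the poll interval and the decision to skip TLS verification are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Skipping certificate verification is a shortcut for this sketch only;
		// a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "/healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // the log shows probes roughly every 500ms
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}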
	I0904 06:53:53.220641 1794879 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:53:53.224544 1794879 system_pods.go:59] 9 kube-system pods found
	I0904 06:53:53.224588 1794879 system_pods.go:61] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.224605 1794879 system_pods.go:61] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.224618 1794879 system_pods.go:61] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.224628 1794879 system_pods.go:61] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.224640 1794879 system_pods.go:61] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.224650 1794879 system_pods.go:61] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.224659 1794879 system_pods.go:61] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.224682 1794879 system_pods.go:61] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.224694 1794879 system_pods.go:61] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.224704 1794879 system_pods.go:74] duration metric: took 4.053609ms to wait for pod list to return data ...
	I0904 06:53:53.224716 1794879 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:53:53.227290 1794879 default_sa.go:45] found service account: "default"
	I0904 06:53:53.227311 1794879 default_sa.go:55] duration metric: took 2.585826ms for default service account to be created ...
	I0904 06:53:53.227319 1794879 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:53:53.230112 1794879 system_pods.go:86] 9 kube-system pods found
	I0904 06:53:53.230142 1794879 system_pods.go:89] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.230154 1794879 system_pods.go:89] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.230162 1794879 system_pods.go:89] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.230172 1794879 system_pods.go:89] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.230180 1794879 system_pods.go:89] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.230191 1794879 system_pods.go:89] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.230201 1794879 system_pods.go:89] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.230212 1794879 system_pods.go:89] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.230218 1794879 system_pods.go:89] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.230227 1794879 system_pods.go:126] duration metric: took 2.90283ms to wait for k8s-apps to be running ...
	I0904 06:53:53.230240 1794879 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:53:53.230287 1794879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:53:53.241829 1794879 system_svc.go:56] duration metric: took 11.584133ms WaitForService to wait for kubelet
	I0904 06:53:53.241853 1794879 kubeadm.go:578] duration metric: took 7.406070053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:53.241869 1794879 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:53:53.244406 1794879 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:53:53.244445 1794879 node_conditions.go:123] node cpu capacity is 8
	I0904 06:53:53.244459 1794879 node_conditions.go:105] duration metric: took 2.584951ms to run NodePressure ...
	I0904 06:53:53.244478 1794879 start.go:241] waiting for startup goroutines ...
	I0904 06:53:53.244492 1794879 start.go:246] waiting for cluster config update ...
	I0904 06:53:53.244509 1794879 start.go:255] writing updated cluster config ...
	I0904 06:53:53.244784 1794879 ssh_runner.go:195] Run: rm -f paused
	I0904 06:53:53.248131 1794879 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:53:53.251511 1794879 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
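The pod_ready wait that starts here checks each kube-system pod's Ready condition through the Kubernetes API. A rough client-go sketch of that per-pod check follows; the kubeconfig path is a placeholder and this is not minikube's pod_ready.go, just the same idea.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod currently has condition Ready=True.
func isPodReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path; minikube points at its per-profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(client, "kube-system", "coredns-66bc5c9577-j5gww")
	fmt.Println(ready, err)
}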
	I0904 06:53:49.659280 1796928 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-520775" ...
	I0904 06:53:49.659366 1796928 cli_runner.go:164] Run: docker start default-k8s-diff-port-520775
	I0904 06:53:49.944765 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.965484 1796928 kic.go:430] container "default-k8s-diff-port-520775" state is running.
	I0904 06:53:49.965966 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:49.984536 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.984754 1796928 machine.go:93] provisionDockerMachine start ...
	I0904 06:53:49.984828 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:50.006739 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:50.007122 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:50.007149 1796928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 06:53:50.011282 1796928 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0904 06:53:53.135459 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.135490 1796928 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-520775"
	I0904 06:53:53.135560 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.153046 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.153307 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.153323 1796928 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-520775 && echo "default-k8s-diff-port-520775" | sudo tee /etc/hostname
	I0904 06:53:53.284177 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.284278 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.302854 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.303062 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.303082 1796928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-520775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-520775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-520775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 06:53:53.428269 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 06:53:53.428306 1796928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1516970/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1516970/.minikube}
	I0904 06:53:53.428357 1796928 ubuntu.go:190] setting up certificates
	I0904 06:53:53.428381 1796928 provision.go:84] configureAuth start
	I0904 06:53:53.428449 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:53.447935 1796928 provision.go:143] copyHostCerts
	I0904 06:53:53.448036 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem, removing ...
	I0904 06:53:53.448051 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem
	I0904 06:53:53.448113 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem (1082 bytes)
	I0904 06:53:53.448215 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem, removing ...
	I0904 06:53:53.448223 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem
	I0904 06:53:53.448247 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem (1123 bytes)
	I0904 06:53:53.448320 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem, removing ...
	I0904 06:53:53.448326 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem
	I0904 06:53:53.448347 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem (1675 bytes)
	I0904 06:53:53.448409 1796928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-520775 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-520775 localhost minikube]
	I0904 06:53:53.540900 1796928 provision.go:177] copyRemoteCerts
	I0904 06:53:53.540966 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 06:53:53.541003 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.558727 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:53.650335 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 06:53:53.677813 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0904 06:53:53.700987 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 06:53:53.724318 1796928 provision.go:87] duration metric: took 295.918548ms to configureAuth
	I0904 06:53:53.724345 1796928 ubuntu.go:206] setting minikube options for container-runtime
	I0904 06:53:53.724529 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:53.724626 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.743241 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.743467 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.743488 1796928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 06:53:54.045106 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 06:53:54.045134 1796928 machine.go:96] duration metric: took 4.060362432s to provisionDockerMachine
	I0904 06:53:54.045148 1796928 start.go:293] postStartSetup for "default-k8s-diff-port-520775" (driver="docker")
	I0904 06:53:54.045187 1796928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 06:53:54.045256 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 06:53:54.045307 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.064198 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.152873 1796928 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 06:53:54.156293 1796928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 06:53:54.156319 1796928 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 06:53:54.156326 1796928 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 06:53:54.156333 1796928 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 06:53:54.156345 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/addons for local assets ...
	I0904 06:53:54.156399 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/files for local assets ...
	I0904 06:53:54.156481 1796928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem -> 15207162.pem in /etc/ssl/certs
	I0904 06:53:54.156610 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 06:53:54.165073 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:54.187780 1796928 start.go:296] duration metric: took 142.614938ms for postStartSetup
	I0904 06:53:54.187887 1796928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:53:54.187937 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.205683 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.292859 1796928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 06:53:54.297265 1796928 fix.go:56] duration metric: took 4.65950064s for fixHost
	I0904 06:53:54.297289 1796928 start.go:83] releasing machines lock for "default-k8s-diff-port-520775", held for 4.659549727s
	I0904 06:53:54.297358 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:54.315327 1796928 ssh_runner.go:195] Run: cat /version.json
	I0904 06:53:54.315393 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.315420 1796928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 06:53:54.315484 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.335338 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.336109 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.493584 1796928 ssh_runner.go:195] Run: systemctl --version
	I0904 06:53:54.498345 1796928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 06:53:54.638467 1796928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 06:53:54.642924 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.652284 1796928 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 06:53:54.652347 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.660849 1796928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 06:53:54.660875 1796928 start.go:495] detecting cgroup driver to use...
	I0904 06:53:54.660913 1796928 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 06:53:54.660966 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 06:53:54.672418 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 06:53:54.683134 1796928 docker.go:218] disabling cri-docker service (if available) ...
	I0904 06:53:54.683181 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 06:53:54.695400 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 06:53:54.706646 1796928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 06:53:54.793740 1796928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 06:53:54.873854 1796928 docker.go:234] disabling docker service ...
	I0904 06:53:54.873933 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 06:53:54.885885 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 06:53:54.896737 1796928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 06:53:54.980788 1796928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 06:53:55.057730 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 06:53:55.068310 1796928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 06:53:55.083683 1796928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 06:53:55.083736 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.093158 1796928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 06:53:55.093215 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.102672 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.113082 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.122399 1796928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 06:53:55.131334 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.140602 1796928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.150009 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.159908 1796928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 06:53:55.167649 1796928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 06:53:55.175680 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.254239 1796928 ssh_runner.go:195] Run: sudo systemctl restart crio
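The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. The two central rewrites could be expressed in Go roughly as below; the file path and values are taken from the log, the rest is an illustrative sketch rather than minikube's code.

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the logged sed edits: force the pause image and the
// cgroup manager to known values inside the given CRI-O drop-in file.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "`+cgroupManager+`"`)
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	// Values as logged; a real run would follow this with `systemctl restart crio`.
	_ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs")
}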
	I0904 06:53:55.362926 1796928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 06:53:55.363001 1796928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 06:53:55.366648 1796928 start.go:563] Will wait 60s for crictl version
	I0904 06:53:55.366695 1796928 ssh_runner.go:195] Run: which crictl
	I0904 06:53:55.369962 1796928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 06:53:55.403453 1796928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 06:53:55.403538 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.441474 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.479608 1796928 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 06:53:55.480915 1796928 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-520775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 06:53:55.497935 1796928 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0904 06:53:55.502150 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:53:55.514295 1796928 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 06:53:55.514485 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:55.514556 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.564218 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.564245 1796928 crio.go:433] Images already preloaded, skipping extraction
	I0904 06:53:55.564292 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.602409 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.602436 1796928 cache_images.go:85] Images are preloaded, skipping loading
	I0904 06:53:55.602446 1796928 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 crio true true} ...
	I0904 06:53:55.602577 1796928 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-520775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 06:53:55.602645 1796928 ssh_runner.go:195] Run: crio config
	I0904 06:53:55.664543 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:55.664570 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:55.664584 1796928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 06:53:55.664612 1796928 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-520775 NodeName:default-k8s-diff-port-520775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 06:53:55.664768 1796928 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-520775"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 06:53:55.664845 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 06:53:55.673590 1796928 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 06:53:55.673661 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 06:53:55.682016 1796928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0904 06:53:55.699448 1796928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 06:53:55.717472 1796928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
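The file copied above to /var/tmp/minikube/kubeadm.yaml.new bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents shown earlier into one multi-document YAML. A small sketch of reading such a file back and pulling out the KubeletConfiguration's cgroupDriver (using gopkg.in/yaml.v3; whether minikube itself ever re-reads the file this way is an assumption) could look like this.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Iterate over the "---"-separated documents in the generated config.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// The kubelet document carries the cgroup driver, which has to match
		// the cgroup_manager written into the CRI-O config above.
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
		}
	}
}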
	I0904 06:53:55.734579 1796928 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0904 06:53:55.737941 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:53:55.748899 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.834506 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:55.848002 1796928 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775 for IP: 192.168.103.2
	I0904 06:53:55.848028 1796928 certs.go:194] generating shared ca certs ...
	I0904 06:53:55.848048 1796928 certs.go:226] acquiring lock for ca certs: {Name:mk2d06825c36f44304767b415a9a93c84edb2667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:55.848186 1796928 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key
	I0904 06:53:55.848228 1796928 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key
	I0904 06:53:55.848237 1796928 certs.go:256] generating profile certs ...
	I0904 06:53:55.848310 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/client.key
	I0904 06:53:55.848365 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key.6ec15110
	I0904 06:53:55.848406 1796928 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key
	I0904 06:53:55.848517 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem (1338 bytes)
	W0904 06:53:55.848547 1796928 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716_empty.pem, impossibly tiny 0 bytes
	I0904 06:53:55.848556 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 06:53:55.848578 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem (1082 bytes)
	I0904 06:53:55.848601 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem (1123 bytes)
	I0904 06:53:55.848627 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem (1675 bytes)
	I0904 06:53:55.848669 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:55.849251 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 06:53:55.876639 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 06:53:55.904012 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 06:53:55.936371 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 06:53:56.018233 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0904 06:53:56.041340 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 06:53:56.065911 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 06:53:56.089737 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 06:53:56.112935 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /usr/share/ca-certificates/15207162.pem (1708 bytes)
	I0904 06:53:56.138060 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 06:53:56.162385 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem --> /usr/share/ca-certificates/1520716.pem (1338 bytes)
	I0904 06:53:56.185546 1796928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 06:53:56.202891 1796928 ssh_runner.go:195] Run: openssl version
	I0904 06:53:56.208611 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15207162.pem && ln -fs /usr/share/ca-certificates/15207162.pem /etc/ssl/certs/15207162.pem"
	I0904 06:53:56.219865 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223785 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:07 /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223867 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.231657 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15207162.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 06:53:56.243527 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 06:53:56.253334 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257449 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 06:00 /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257517 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.264253 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 06:53:56.273629 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1520716.pem && ln -fs /usr/share/ca-certificates/1520716.pem /etc/ssl/certs/1520716.pem"
	I0904 06:53:56.283120 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286378 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:07 /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286450 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.293207 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1520716.pem /etc/ssl/certs/51391683.0"
	I0904 06:53:56.301668 1796928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 06:53:56.308006 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 06:53:56.315155 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 06:53:56.322059 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 06:53:56.329568 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 06:53:56.337737 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 06:53:56.345511 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
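Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours. An equivalent check written directly in Go is sketched below; the path is one of the files from the log, and the helper name is made up for illustration.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(soon, err) // a "true" result would trigger certificate regeneration
}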
	I0904 06:53:56.353351 1796928 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:56.353482 1796928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 06:53:56.353539 1796928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 06:53:56.397941 1796928 cri.go:89] found id: ""
	I0904 06:53:56.398012 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 06:53:56.408886 1796928 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 06:53:56.408981 1796928 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0904 06:53:56.409041 1796928 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 06:53:56.424530 1796928 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 06:53:56.425727 1796928 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-520775" does not appear in /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.426580 1796928 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-1516970/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-520775" cluster setting kubeconfig missing "default-k8s-diff-port-520775" context setting]
	I0904 06:53:56.427949 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.430031 1796928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 06:53:56.444430 1796928 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0904 06:53:56.444470 1796928 kubeadm.go:593] duration metric: took 35.478353ms to restartPrimaryControlPlane
	I0904 06:53:56.444481 1796928 kubeadm.go:394] duration metric: took 91.143305ms to StartCluster
	I0904 06:53:56.444503 1796928 settings.go:142] acquiring lock: {Name:mk2d1c8a569b62879275d6daa2b799b595d6bca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.444560 1796928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.447245 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.447495 1796928 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 06:53:56.447711 1796928 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 06:53:56.447836 1796928 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447860 1796928 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447868 1796928 addons.go:247] addon storage-provisioner should already be in state true
	I0904 06:53:56.447888 1796928 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447903 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.447928 1796928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-520775"
	I0904 06:53:56.447921 1796928 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447939 1796928 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447979 1796928 addons.go:247] addon dashboard should already be in state true
	I0904 06:53:56.447980 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	W0904 06:53:56.447982 1796928 addons.go:247] addon metrics-server should already be in state true
	I0904 06:53:56.448017 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448020 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448276 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448431 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448473 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448520 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.450093 1796928 out.go:179] * Verifying Kubernetes components...
	I0904 06:53:56.451389 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:56.482390 1796928 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.482412 1796928 addons.go:247] addon default-storageclass should already be in state true
	I0904 06:53:56.482437 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.482730 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.485071 1796928 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 06:53:56.485089 1796928 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0904 06:53:56.488270 1796928 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.488294 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 06:53:56.488355 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.490382 1796928 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0904 06:53:56.491521 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0904 06:53:56.491536 1796928 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0904 06:53:56.491584 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.496773 1796928 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W0904 06:53:55.257485 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:53:57.757496 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:53:56.497920 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 06:53:56.497941 1796928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 06:53:56.498005 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.511983 1796928 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.512010 1796928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 06:53:56.512072 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.529596 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.531423 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.543761 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.547939 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.815518 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:56.824564 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.900475 1796928 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:53:56.903122 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.915401 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0904 06:53:56.915439 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0904 06:53:57.011674 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 06:53:57.011705 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0904 06:53:57.025890 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0904 06:53:57.025929 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0904 06:53:57.130640 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0904 06:53:57.130669 1796928 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0904 06:53:57.201935 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 06:53:57.201971 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	W0904 06:53:57.228446 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228496 1796928 retry.go:31] will retry after 331.542893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 06:53:57.228576 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228595 1796928 retry.go:31] will retry after 234.661911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
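(The two apply failures above are transient: the apiserver on port 8444 is not accepting connections yet, so each apply is re-queued with a short delay, as the "will retry after ..." lines show. Below is a minimal Go sketch of that retry-with-delay pattern; the function name, attempt count, and delay growth are illustrative only and this is not minikube's actual retry.go.)

	package main

	import (
		"log"
		"time"
	)

	// applyWithRetry re-runs an apply step until it succeeds or the attempts
	// are exhausted, sleeping a growing delay between tries.
	func applyWithRetry(apply func() error, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			delay := base * time.Duration(i+1) // illustrative backoff, not minikube's schedule
			log.Printf("apply failed, will retry after %v: %v", delay, err)
			time.Sleep(delay)
		}
		return err
	}
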
	I0904 06:53:57.233201 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0904 06:53:57.233235 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0904 06:53:57.312449 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.312483 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 06:53:57.335196 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0904 06:53:57.335296 1796928 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0904 06:53:57.340794 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.423747 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0904 06:53:57.423855 1796928 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0904 06:53:57.464378 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:57.517739 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0904 06:53:57.517836 1796928 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0904 06:53:57.560380 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:57.621494 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0904 06:53:57.621580 1796928 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0904 06:53:57.719817 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:53:57.719851 1796928 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0904 06:53:57.808921 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:54:00.222294 1796928 node_ready.go:49] node "default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:00.222393 1796928 node_ready.go:38] duration metric: took 3.321861305s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:54:00.222414 1796928 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:54:00.222514 1796928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:54:02.420531 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.07964965s)
	I0904 06:54:02.420574 1796928 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-520775"
	I0904 06:54:02.420586 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.956118872s)
	I0904 06:54:02.420682 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.860244874s)
	I0904 06:54:02.420925 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.611964012s)
	I0904 06:54:02.420956 1796928 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.198413181s)
	I0904 06:54:02.421147 1796928 api_server.go:72] duration metric: took 5.973615373s to wait for apiserver process to appear ...
	I0904 06:54:02.421161 1796928 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:54:02.421181 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.422911 1796928 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-520775 addons enable metrics-server
	
	I0904 06:54:02.426397 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.426463 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:02.428576 1796928 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	W0904 06:53:59.759069 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:02.258100 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:02.429861 1796928 addons.go:514] duration metric: took 5.982154586s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:54:02.921448 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.926218 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.926239 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:03.421924 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:03.427035 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0904 06:54:03.428103 1796928 api_server.go:141] control plane version: v1.34.0
	I0904 06:54:03.428127 1796928 api_server.go:131] duration metric: took 1.006959868s to wait for apiserver health ...
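(The checks above poll the apiserver's /healthz endpoint and keep retrying while it returns 500, stopping once it returns 200. A minimal Go sketch of that polling loop, assuming a pre-configured *http.Client that trusts the cluster CA; TLS setup is omitted and the interval and timeout are illustrative, not minikube's actual api_server.go.)

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitHealthz(client *http.Client, url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				log.Printf("healthz returned %d: %s", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // illustrative poll interval
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}
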
	I0904 06:54:03.428136 1796928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:54:03.434471 1796928 system_pods.go:59] 9 kube-system pods found
	I0904 06:54:03.434508 1796928 system_pods.go:61] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.434519 1796928 system_pods.go:61] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.434525 1796928 system_pods.go:61] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.434533 1796928 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.434544 1796928 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.434564 1796928 system_pods.go:61] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.434573 1796928 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.434586 1796928 system_pods.go:61] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.434594 1796928 system_pods.go:61] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.434602 1796928 system_pods.go:74] duration metric: took 6.460113ms to wait for pod list to return data ...
	I0904 06:54:03.434614 1796928 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:54:03.437095 1796928 default_sa.go:45] found service account: "default"
	I0904 06:54:03.437116 1796928 default_sa.go:55] duration metric: took 2.49678ms for default service account to be created ...
	I0904 06:54:03.437124 1796928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:54:03.439954 1796928 system_pods.go:86] 9 kube-system pods found
	I0904 06:54:03.439997 1796928 system_pods.go:89] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.440010 1796928 system_pods.go:89] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.440018 1796928 system_pods.go:89] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.440029 1796928 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.440043 1796928 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.440053 1796928 system_pods.go:89] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.440060 1796928 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.440072 1796928 system_pods.go:89] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.440078 1796928 system_pods.go:89] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.440085 1796928 system_pods.go:126] duration metric: took 2.955ms to wait for k8s-apps to be running ...
	I0904 06:54:03.440100 1796928 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:54:03.440162 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:54:03.451705 1796928 system_svc.go:56] duration metric: took 11.594555ms WaitForService to wait for kubelet
	I0904 06:54:03.451731 1796928 kubeadm.go:578] duration metric: took 7.004201759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:54:03.451748 1796928 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:54:03.455005 1796928 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:54:03.455036 1796928 node_conditions.go:123] node cpu capacity is 8
	I0904 06:54:03.455062 1796928 node_conditions.go:105] duration metric: took 3.308068ms to run NodePressure ...
	I0904 06:54:03.455079 1796928 start.go:241] waiting for startup goroutines ...
	I0904 06:54:03.455095 1796928 start.go:246] waiting for cluster config update ...
	I0904 06:54:03.455112 1796928 start.go:255] writing updated cluster config ...
	I0904 06:54:03.455408 1796928 ssh_runner.go:195] Run: rm -f paused
	I0904 06:54:03.458944 1796928 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:03.462665 1796928 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 06:54:04.757792 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:07.257591 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:05.468478 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:07.500893 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:09.756895 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:12.257352 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:09.968652 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:12.468012 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:14.756854 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:17.256905 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:14.468746 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:16.967726 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:18.968373 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:19.257325 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:21.757694 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:20.968633 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:23.467871 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:24.256489 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:24.756710 1794879 pod_ready.go:94] pod "coredns-66bc5c9577-j5gww" is "Ready"
	I0904 06:54:24.756744 1794879 pod_ready.go:86] duration metric: took 31.505206553s for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.759357 1794879 pod_ready.go:83] waiting for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.763174 1794879 pod_ready.go:94] pod "etcd-embed-certs-589812" is "Ready"
	I0904 06:54:24.763194 1794879 pod_ready.go:86] duration metric: took 3.815458ms for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.765056 1794879 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.768709 1794879 pod_ready.go:94] pod "kube-apiserver-embed-certs-589812" is "Ready"
	I0904 06:54:24.768729 1794879 pod_ready.go:86] duration metric: took 3.655905ms for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.770312 1794879 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.955369 1794879 pod_ready.go:94] pod "kube-controller-manager-embed-certs-589812" is "Ready"
	I0904 06:54:24.955399 1794879 pod_ready.go:86] duration metric: took 185.06856ms for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.155371 1794879 pod_ready.go:83] waiting for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.555016 1794879 pod_ready.go:94] pod "kube-proxy-xqvlx" is "Ready"
	I0904 06:54:25.555045 1794879 pod_ready.go:86] duration metric: took 399.644529ms for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.754864 1794879 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155740 1794879 pod_ready.go:94] pod "kube-scheduler-embed-certs-589812" is "Ready"
	I0904 06:54:26.155768 1794879 pod_ready.go:86] duration metric: took 400.874171ms for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155779 1794879 pod_ready.go:40] duration metric: took 32.907618487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:26.201526 1794879 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:26.203310 1794879 out.go:179] * Done! kubectl is now configured to use "embed-certs-589812" cluster and "default" namespace by default
	W0904 06:54:25.468180 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:27.468649 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:29.468703 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:31.967748 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:34.467966 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	I0904 06:54:36.468207 1796928 pod_ready.go:94] pod "coredns-66bc5c9577-hm47q" is "Ready"
	I0904 06:54:36.468238 1796928 pod_ready.go:86] duration metric: took 33.005546695s for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.470247 1796928 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.474087 1796928 pod_ready.go:94] pod "etcd-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.474113 1796928 pod_ready.go:86] duration metric: took 3.802864ms for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.476057 1796928 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.479419 1796928 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.479437 1796928 pod_ready.go:86] duration metric: took 3.359104ms for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.481399 1796928 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.666267 1796928 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.666294 1796928 pod_ready.go:86] duration metric: took 184.873705ms for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.866510 1796928 pod_ready.go:83] waiting for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.266395 1796928 pod_ready.go:94] pod "kube-proxy-zrlrh" is "Ready"
	I0904 06:54:37.266428 1796928 pod_ready.go:86] duration metric: took 399.888589ms for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.466543 1796928 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866935 1796928 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:37.866974 1796928 pod_ready.go:86] duration metric: took 400.403816ms for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866986 1796928 pod_ready.go:40] duration metric: took 34.408008083s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
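(The pod_ready waits above repeatedly check each kube-system pod's Ready condition until it becomes True or the pod is gone. A minimal client-go sketch of such a check, assuming an already-built *kubernetes.Clientset; the helper name and polling interval are illustrative and this is not minikube's pod_ready.go.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // illustrative poll interval
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}
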
	I0904 06:54:37.912300 1796928 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:37.913920 1796928 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-520775" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 07:02:13 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:13.944326088Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9990eaa1-3809-4924-8219-494d24d88c6a name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:13 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:13.944937715Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e01a8ff4-a700-4c4f-a751-d23c12f5900d name=/runtime.v1.ImageService/PullImage
	Sep 04 07:02:13 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:13.950692168Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 04 07:02:17 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:17.944303986Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0e1e2632-03f0-4d89-9e4b-bf3ca80b38df name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:17 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:17.944580575Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0e1e2632-03f0-4d89-9e4b-bf3ca80b38df name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:29 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:29.944170304Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=954e2a58-7ef4-4ebb-a772-a163a41f502f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:29 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:29.944452375Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=954e2a58-7ef4-4ebb-a772-a163a41f502f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:43 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:43.944385356Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5056537f-c627-46de-8c0f-132abbe0ad0d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:43 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:43.944683896Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5056537f-c627-46de-8c0f-132abbe0ad0d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:56 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:56.944196928Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9e50aca7-4eef-4473-badd-eb2f78ccd33d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:56 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:56.944427840Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9e50aca7-4eef-4473-badd-eb2f78ccd33d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:58 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:58.944125451Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=eb34f865-1084-4ed2-b2f1-51800d4df290 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:02:58 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:02:58.944518249Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=eb34f865-1084-4ed2-b2f1-51800d4df290 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:07 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:07.944452036Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=35235df0-ebfa-4117-965a-5a66b3f9fccb name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:07 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:07.944747234Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=35235df0-ebfa-4117-965a-5a66b3f9fccb name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:09 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:09.944168066Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4546f067-bb73-4bba-88b8-c626fc7ea688 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:09 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:09.944477380Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4546f067-bb73-4bba-88b8-c626fc7ea688 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:19 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:19.944481080Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2d27cedc-5ee6-43dc-a876-77a535ba99c6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:19 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:19.944771675Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2d27cedc-5ee6-43dc-a876-77a535ba99c6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:22 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:22.943613463Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=250cf828-cba4-4d42-a4c9-f8e2eab47fa5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:22 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:22.943911427Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=250cf828-cba4-4d42-a4c9-f8e2eab47fa5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:32 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:32.944386158Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9815126d-2f2f-4980-b15b-2b0698217059 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:32 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:32.944601413Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9815126d-2f2f-4980-b15b-2b0698217059 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:35 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:35.944658827Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9d442aba-21aa-4ee7-bbf2-5ed90952fb61 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:03:35 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:03:35.945006293Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9d442aba-21aa-4ee7-bbf2-5ed90952fb61 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	fb9628eaf3b0f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   858e1880ccc57       dashboard-metrics-scraper-6ffb444bf9-w8cp6
	651dd18c7303d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   37eaa3ccd86c8       storage-provisioner
	8dd957a92d643       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   2acea6d36b1ec       busybox
	12896fe744d8a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   02869ebade6ec       coredns-66bc5c9577-hm47q
	177f8ea7a363c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   37eaa3ccd86c8       storage-provisioner
	67fd4be4663f3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   3083df93634b4       kindnet-wz7lg
	0cb99392ff213       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   9b45d085cf127       kube-proxy-zrlrh
	1def9424a8c38       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   66edf7d7784d8       kube-scheduler-default-k8s-diff-port-520775
	b657ea960e3b6       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   b194f0de9e5a8       kube-apiserver-default-k8s-diff-port-520775
	c3ca0bd7fce1d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   4bfc430eb71cd       kube-controller-manager-default-k8s-diff-port-520775
	fe9e18633ad68       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   c5248de068b11       etcd-default-k8s-diff-port-520775
	
	
	==> coredns [12896fe744d8a440ab362f6ae7d00d19681e226f2e50d29a6a3e061bc755d6a0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53125 - 56639 "HINFO IN 505985679635397038.1776096097812087659. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020458499s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-520775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-520775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=default-k8s-diff-port-520775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_53_08_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-520775
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 07:03:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 07:02:30 +0000   Thu, 04 Sep 2025 06:53:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 07:02:30 +0000   Thu, 04 Sep 2025 06:53:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 07:02:30 +0000   Thu, 04 Sep 2025 06:53:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 07:02:30 +0000   Thu, 04 Sep 2025 06:53:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-520775
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2cc88600c8c4e1c895ddae82a9d3dfe
	  System UUID:                17e666a0-ae84-4286-9b81-3776014bb3a5
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-hm47q                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-default-k8s-diff-port-520775                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-wz7lg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-520775             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-520775    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-zrlrh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-520775             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-gws8j                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-w8cp6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f6t79                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m37s                  kube-proxy       
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                    node-controller  Node default-k8s-diff-port-520775 event: Registered Node default-k8s-diff-port-520775 in Controller
	  Normal   NodeReady                10m                    kubelet          Node default-k8s-diff-port-520775 status is now: NodeReady
	  Normal   Starting                 9m44s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m43s (x8 over 9m43s)  kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m43s (x8 over 9m43s)  kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m43s (x8 over 9m43s)  kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m34s                  node-controller  Node default-k8s-diff-port-520775 event: Registered Node default-k8s-diff-port-520775 in Controller
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +2.011770] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000003] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +1.535866] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000001] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.003918] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +2.555764] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000006] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000004] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +8.191102] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000008] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000002] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	
	
	==> etcd [fe9e18633ad685a5e18223d4de6fa0bd95b9ff7a556105fd4cc0b9449f68f31c] <==
	{"level":"warn","ts":"2025-09-04T06:53:59.116816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.124439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.131759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.141655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.148570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.156560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.204893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.212086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.226666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.248025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.257756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.265112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.274577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.299986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.308112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.319642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.325226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.332851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.355962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.401215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.408770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.438675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.445447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.454772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.507091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45634","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:03:39 up  4:46,  0 users,  load average: 0.47, 0.93, 1.48
	Linux default-k8s-diff-port-520775 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [67fd4be4663f399a5ab71bec17ea18252f8bdac63c94a8b38f9892bedf5e6ebd] <==
	I0904 07:01:32.514470       1 main.go:301] handling current node
	I0904 07:01:42.515918       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:01:42.515957       1 main.go:301] handling current node
	I0904 07:01:52.512699       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:01:52.512733       1 main.go:301] handling current node
	I0904 07:02:02.518705       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:02:02.518733       1 main.go:301] handling current node
	I0904 07:02:12.515908       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:02:12.515955       1 main.go:301] handling current node
	I0904 07:02:22.513841       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:02:22.513877       1 main.go:301] handling current node
	I0904 07:02:32.509747       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:02:32.509813       1 main.go:301] handling current node
	I0904 07:02:42.515569       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:02:42.515609       1 main.go:301] handling current node
	I0904 07:02:52.511925       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:02:52.511957       1 main.go:301] handling current node
	I0904 07:03:02.512378       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:03:02.512408       1 main.go:301] handling current node
	I0904 07:03:12.510430       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:03:12.510471       1 main.go:301] handling current node
	I0904 07:03:22.512322       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:03:22.512363       1 main.go:301] handling current node
	I0904 07:03:32.511892       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:03:32.511923       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b657ea960e3b6bcf1c194db3a320f280623b711353707a906b9aa137fbb3678d] <==
	I0904 06:59:18.705807       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 07:00:01.231826       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:00:01.231881       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 07:00:01.231898       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:00:01.232206       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:00:01.232286       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 07:00:01.233444       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:00:23.317590       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:00:39.344380       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:01:45.702267       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:01:48.393865       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 07:02:01.232053       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:02:01.232102       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 07:02:01.232119       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:02:01.233834       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:02:01.233915       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 07:02:01.233927       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:02:47.382880       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:03:16.327266       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [c3ca0bd7fce1d06b880bcd74e973b0fba7c77720f38d0d574df75a25383a8c46] <==
	I0904 06:57:35.498321       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:58:05.461071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:58:05.505133       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:58:35.465798       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:58:35.511783       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:59:05.470345       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:59:05.519204       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 06:59:35.475130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 06:59:35.526015       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:00:05.480478       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:00:05.533409       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:00:35.485569       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:00:35.541530       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:01:05.490010       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:01:05.549028       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:01:35.493847       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:01:35.556250       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:02:05.498825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:02:05.564065       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:02:35.504174       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:02:35.571336       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:03:05.509919       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:03:05.577846       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:03:35.514980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:03:35.584590       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0cb99392ff213a29f74c574a1f464514f40d13fb8b2ad415260fbe656f861f78] <==
	I0904 06:54:02.313807       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:54:02.481309       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:54:02.581712       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:54:02.581749       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0904 06:54:02.581851       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:54:02.702806       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:54:02.702870       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:54:02.707621       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:54:02.708314       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:54:02.708370       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:54:02.709852       1 config.go:200] "Starting service config controller"
	I0904 06:54:02.709885       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:54:02.709884       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:54:02.709909       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:54:02.709994       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:54:02.710022       1 config.go:309] "Starting node config controller"
	I0904 06:54:02.710031       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:54:02.710024       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:54:02.810407       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 06:54:02.810421       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:54:02.810433       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:54:02.810456       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1def9424a8c382f52727704fa488898d6b4bf4fb2cc4750aa640e9abba2caeef] <==
	I0904 06:54:00.312077       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:54:00.316820       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:54:00.317029       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:54:00.317632       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:54:00.317057       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0904 06:54:00.418138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 06:54:00.418315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 06:54:00.418419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 06:54:00.418459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 06:54:00.418505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 06:54:00.418619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 06:54:00.418691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 06:54:00.418610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 06:54:00.418834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 06:54:00.418851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 06:54:00.419014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 06:54:00.419027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 06:54:00.419173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 06:54:00.419173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 06:54:00.419220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 06:54:00.419334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 06:54:00.419440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 06:54:00.419604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 06:54:00.420531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I0904 06:54:01.717847       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 07:02:54 default-k8s-diff-port-520775 kubelet[808]: E0904 07:02:54.944277     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w8cp6_kubernetes-dashboard(964b57fc-3542-48a2-a344-ab740188dfea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w8cp6" podUID="964b57fc-3542-48a2-a344-ab740188dfea"
	Sep 04 07:02:56 default-k8s-diff-port-520775 kubelet[808]: E0904 07:02:56.088433     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969376088186527  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:02:56 default-k8s-diff-port-520775 kubelet[808]: E0904 07:02:56.088468     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969376088186527  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:02:56 default-k8s-diff-port-520775 kubelet[808]: E0904 07:02:56.944816     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-gws8j" podUID="16bf9326-2429-4d6b-a6ed-6dc44262c35e"
	Sep 04 07:02:58 default-k8s-diff-port-520775 kubelet[808]: E0904 07:02:58.944906     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79" podUID="c1e25916-a16a-4ee2-9aaa-895d41ffbe6e"
	Sep 04 07:03:06 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:06.089994     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969386089747346  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:06 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:06.090040     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969386089747346  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:06 default-k8s-diff-port-520775 kubelet[808]: I0904 07:03:06.943170     808 scope.go:117] "RemoveContainer" containerID="fb9628eaf3b0f22147312294a550f681e5c0987d95b6e273a712b43e2e662544"
	Sep 04 07:03:06 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:06.943350     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w8cp6_kubernetes-dashboard(964b57fc-3542-48a2-a344-ab740188dfea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w8cp6" podUID="964b57fc-3542-48a2-a344-ab740188dfea"
	Sep 04 07:03:07 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:07.945066     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-gws8j" podUID="16bf9326-2429-4d6b-a6ed-6dc44262c35e"
	Sep 04 07:03:09 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:09.944894     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79" podUID="c1e25916-a16a-4ee2-9aaa-895d41ffbe6e"
	Sep 04 07:03:16 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:16.091660     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969396091404115  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:16 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:16.091707     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969396091404115  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:19 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:19.945047     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-gws8j" podUID="16bf9326-2429-4d6b-a6ed-6dc44262c35e"
	Sep 04 07:03:20 default-k8s-diff-port-520775 kubelet[808]: I0904 07:03:20.943718     808 scope.go:117] "RemoveContainer" containerID="fb9628eaf3b0f22147312294a550f681e5c0987d95b6e273a712b43e2e662544"
	Sep 04 07:03:20 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:20.943954     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w8cp6_kubernetes-dashboard(964b57fc-3542-48a2-a344-ab740188dfea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w8cp6" podUID="964b57fc-3542-48a2-a344-ab740188dfea"
	Sep 04 07:03:22 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:22.944259     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79" podUID="c1e25916-a16a-4ee2-9aaa-895d41ffbe6e"
	Sep 04 07:03:26 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:26.093272     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969406093033232  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:26 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:26.093315     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969406093033232  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:32 default-k8s-diff-port-520775 kubelet[808]: I0904 07:03:32.944054     808 scope.go:117] "RemoveContainer" containerID="fb9628eaf3b0f22147312294a550f681e5c0987d95b6e273a712b43e2e662544"
	Sep 04 07:03:32 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:32.944228     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w8cp6_kubernetes-dashboard(964b57fc-3542-48a2-a344-ab740188dfea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w8cp6" podUID="964b57fc-3542-48a2-a344-ab740188dfea"
	Sep 04 07:03:32 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:32.944923     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-gws8j" podUID="16bf9326-2429-4d6b-a6ed-6dc44262c35e"
	Sep 04 07:03:35 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:35.945365     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79" podUID="c1e25916-a16a-4ee2-9aaa-895d41ffbe6e"
	Sep 04 07:03:36 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:36.095228     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969416094975726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:03:36 default-k8s-diff-port-520775 kubelet[808]: E0904 07:03:36.095267     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969416094975726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	
	
	==> storage-provisioner [177f8ea7a363c3c3b050aea14ac0273afcac9985a9fe1621523044d67f709d9a] <==
	I0904 06:54:02.308824       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 06:54:32.311321       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [651dd18c7303d259954eb0ef6f0d2406a279376559ba295730ef62f148ff5b40] <==
	W0904 07:03:14.622550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:16.626383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:16.632307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:18.635445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:18.639523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:20.643200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:20.648223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:22.651474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:22.655547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:24.658990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:24.664619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:26.667857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:26.672213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:28.675062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:28.679354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:30.682090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:30.686187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:32.689197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:32.693034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:34.696128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:34.700339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:36.704365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:36.708376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:38.711518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:03:38.716047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-gws8j kubernetes-dashboard-855c9754f9-f6t79
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 describe pod metrics-server-746fcd58dc-gws8j kubernetes-dashboard-855c9754f9-f6t79
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-520775 describe pod metrics-server-746fcd58dc-gws8j kubernetes-dashboard-855c9754f9-f6t79: exit status 1 (60.529872ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-gws8j" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-f6t79" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-520775 describe pod metrics-server-746fcd58dc-gws8j kubernetes-dashboard-855c9754f9-f6t79: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.44s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ctkhj" [191398b6-c62e-4c25-9bed-1fea30f5fed5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-869290 -n old-k8s-version-869290
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-04 07:09:38.424992785 +0000 UTC m=+4169.214023414
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-869290 describe po kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context old-k8s-version-869290 describe po kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-ctkhj
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-869290/192.168.76.2
Start Time:       Thu, 04 Sep 2025 06:51:08 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d8fhm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-d8fhm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj to old-k8s-version-869290
  Warning  Failed     17m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    15m (x4 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     15m (x4 over 17m)     kubelet            Error: ErrImagePull
  Warning  Failed     14m (x6 over 17m)     kubelet            Error: ImagePullBackOff
  Warning  Failed     13m (x4 over 17m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    3m24s (x49 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-869290 logs kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-869290 logs kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard: exit status 1 (73.621926ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-ctkhj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context old-k8s-version-869290 logs kubernetes-dashboard-8694d4445c-ctkhj -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-869290 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-869290
helpers_test.go:243: (dbg) docker inspect old-k8s-version-869290:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713",
	        "Created": "2025-09-04T06:49:35.46602092Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1771260,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:50:43.793377686Z",
	            "FinishedAt": "2025-09-04T06:50:43.068021983Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713/hostname",
	        "HostsPath": "/var/lib/docker/containers/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713/hosts",
	        "LogPath": "/var/lib/docker/containers/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713/206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713-json.log",
	        "Name": "/old-k8s-version-869290",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-869290:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-869290",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "206772efca5eed4f4fbcaa056239cf810dc6bebf5333becc0ff89e5c404db713",
	                "LowerDir": "/var/lib/docker/overlay2/70054fc1cd8315be99686a375dd5ad1c3d78f07ef6a4c2df95fc8ae6e1b848dd-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70054fc1cd8315be99686a375dd5ad1c3d78f07ef6a4c2df95fc8ae6e1b848dd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70054fc1cd8315be99686a375dd5ad1c3d78f07ef6a4c2df95fc8ae6e1b848dd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70054fc1cd8315be99686a375dd5ad1c3d78f07ef6a4c2df95fc8ae6e1b848dd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-869290",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-869290/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-869290",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-869290",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-869290",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfd9887ae856137840ff4089e7352aa402b336956352d94f420ad864129004d3",
	            "SandboxKey": "/var/run/docker/netns/bfd9887ae856",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34258"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-869290": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:ec:85:32:4a:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66257fe74e8a729f876c63df282eb573f7ca67afcf17672f4f62529bc49d57cd",
	                    "EndpointID": "b701321bbecfc061764b1cea2e9550663e6d4b42a47d0062268de3841999df69",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-869290",
	                        "206772efca5e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
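For reference, individual fields in an inspect dump like the one above can be read with a Go template instead of scanning the full JSON; minikube's own helpers use the same template later in this log to resolve the mapped SSH port. An illustrative invocation against the state captured above (container name and port values taken from the output, not re-verified):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-869290
	# would print 34254, the host port mapped to the container's SSH port in this snapshot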
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-869290 -n old-k8s-version-869290
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-869290 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-869290 logs -n 25: (1.210899916s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-869290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p old-k8s-version-869290 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-869290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ start   │ -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable metrics-server -p no-preload-574576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:50 UTC │
	│ stop    │ -p no-preload-574576 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable dashboard -p no-preload-574576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p cert-expiration-620042 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ delete  │ -p cert-expiration-620042                                                                                                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │                     │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p kubernetes-upgrade-892549                                                                                                                                                                                                                  │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p disable-driver-mounts-393542                                                                                                                                                                                                               │ disable-driver-mounts-393542 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-589812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p embed-certs-589812 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-520775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p default-k8s-diff-port-520775 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-589812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-520775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
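If the old-k8s-version restart that precedes this failure needs to be replayed locally, the Audit table above records the exact invocation; a sketch using the same flags, assuming the CI workspace layout where the binary sits at out/minikube-linux-amd64:

	out/minikube-linux-amd64 start -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=crio --kubernetes-version=v1.28.0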
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:53:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:53:49.418555 1796928 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:53:49.418725 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.418774 1796928 out.go:374] Setting ErrFile to fd 2...
	I0904 06:53:49.418785 1796928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:53:49.419117 1796928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:53:49.419985 1796928 out.go:368] Setting JSON to false
	I0904 06:53:49.421632 1796928 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16579,"bootTime":1756952250,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:53:49.421749 1796928 start.go:140] virtualization: kvm guest
	I0904 06:53:49.423972 1796928 out.go:179] * [default-k8s-diff-port-520775] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:53:49.425842 1796928 notify.go:220] Checking for updates...
	I0904 06:53:49.425850 1796928 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:53:49.427436 1796928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:53:49.428783 1796928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:49.429989 1796928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:53:49.431134 1796928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:53:49.432406 1796928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:53:49.434250 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:49.435089 1796928 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:53:49.462481 1796928 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:53:49.462577 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.536244 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.525128821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.536390 1796928 docker.go:318] overlay module found
	I0904 06:53:49.539526 1796928 out.go:179] * Using the docker driver based on existing profile
	I0904 06:53:49.540719 1796928 start.go:304] selected driver: docker
	I0904 06:53:49.540734 1796928 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.540822 1796928 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:53:49.541681 1796928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:53:49.594566 1796928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 06:53:49.585030944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:53:49.595064 1796928 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:49.595111 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:49.595174 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:49.595223 1796928 start.go:348] cluster config:
	{Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:49.597216 1796928 out.go:179] * Starting "default-k8s-diff-port-520775" primary control-plane node in "default-k8s-diff-port-520775" cluster
	I0904 06:53:49.598401 1796928 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 06:53:49.599526 1796928 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 06:53:49.604882 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:49.604957 1796928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 06:53:49.604977 1796928 cache.go:58] Caching tarball of preloaded images
	I0904 06:53:49.604992 1796928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 06:53:49.605104 1796928 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 06:53:49.605123 1796928 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 06:53:49.605341 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.637613 1796928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 06:53:49.637635 1796928 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 06:53:49.637647 1796928 cache.go:232] Successfully downloaded all kic artifacts
	I0904 06:53:49.637673 1796928 start.go:360] acquireMachinesLock for default-k8s-diff-port-520775: {Name:mkd2b36988a85f8d5c3a19497a99007da8aadae2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 06:53:49.637729 1796928 start.go:364] duration metric: took 33.006µs to acquireMachinesLock for "default-k8s-diff-port-520775"
	I0904 06:53:49.637749 1796928 start.go:96] Skipping create...Using existing machine configuration
	I0904 06:53:49.637756 1796928 fix.go:54] fixHost starting: 
	I0904 06:53:49.637963 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.656941 1796928 fix.go:112] recreateIfNeeded on default-k8s-diff-port-520775: state=Stopped err=<nil>
	W0904 06:53:49.656986 1796928 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 06:53:49.524554 1794879 node_ready.go:49] node "embed-certs-589812" is "Ready"
	I0904 06:53:49.524655 1794879 node_ready.go:38] duration metric: took 3.407781482s for node "embed-certs-589812" to be "Ready" ...
	I0904 06:53:49.524688 1794879 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:53:49.524773 1794879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:53:51.714274 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.110482825s)
	I0904 06:53:51.714323 1794879 addons.go:479] Verifying addon metrics-server=true in "embed-certs-589812"
	I0904 06:53:51.714427 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.971633666s)
	I0904 06:53:51.714457 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.901617894s)
	I0904 06:53:51.714590 1794879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.702133151s)
	I0904 06:53:51.714600 1794879 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.189780106s)
	I0904 06:53:51.714619 1794879 api_server.go:72] duration metric: took 5.87883589s to wait for apiserver process to appear ...
	I0904 06:53:51.714626 1794879 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:53:51.714643 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:51.716342 1794879 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-589812 addons enable metrics-server
	
	I0904 06:53:51.722283 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:51.722308 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:51.730360 1794879 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0904 06:53:51.731942 1794879 addons.go:514] duration metric: took 5.89615636s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:53:52.215034 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.219745 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.219786 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:52.715125 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:52.719686 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:53:52.719714 1794879 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:53:53.215303 1794879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0904 06:53:53.219535 1794879 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0904 06:53:53.220593 1794879 api_server.go:141] control plane version: v1.34.0
	I0904 06:53:53.220626 1794879 api_server.go:131] duration metric: took 1.505992813s to wait for apiserver health ...
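The retries above show the apiserver's detailed healthz body flipping from 500 (with one or more poststart hooks not yet finished) to a plain 200 "ok". A minimal sketch of probing the same endpoint by hand, assuming the default RBAC binding that allows anonymous access to /healthz is in place; -k skips TLS verification against the cluster CA, so this is an illustrative check only:

	curl -k 'https://192.168.94.2:8443/healthz?verbose'
	# with ?verbose the apiserver lists each [+]/[-] check; without it a healthy server returns just "ok"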
	I0904 06:53:53.220641 1794879 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:53:53.224544 1794879 system_pods.go:59] 9 kube-system pods found
	I0904 06:53:53.224588 1794879 system_pods.go:61] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.224605 1794879 system_pods.go:61] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.224618 1794879 system_pods.go:61] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.224628 1794879 system_pods.go:61] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.224640 1794879 system_pods.go:61] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.224650 1794879 system_pods.go:61] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.224659 1794879 system_pods.go:61] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.224682 1794879 system_pods.go:61] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.224694 1794879 system_pods.go:61] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.224704 1794879 system_pods.go:74] duration metric: took 4.053609ms to wait for pod list to return data ...
	I0904 06:53:53.224716 1794879 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:53:53.227290 1794879 default_sa.go:45] found service account: "default"
	I0904 06:53:53.227311 1794879 default_sa.go:55] duration metric: took 2.585826ms for default service account to be created ...
	I0904 06:53:53.227319 1794879 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:53:53.230112 1794879 system_pods.go:86] 9 kube-system pods found
	I0904 06:53:53.230142 1794879 system_pods.go:89] "coredns-66bc5c9577-j5gww" [e3612616-edf7-408c-8d20-966c456e4a43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:53:53.230154 1794879 system_pods.go:89] "etcd-embed-certs-589812" [ffde7899-36bf-4837-8a40-30b11624fd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:53:53.230162 1794879 system_pods.go:89] "kindnet-wtgxv" [7570cefc-495d-4c68-83e5-04a04d12775a] Running
	I0904 06:53:53.230172 1794879 system_pods.go:89] "kube-apiserver-embed-certs-589812" [095a13f2-431a-46bd-a6b2-d9f475bd60cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:53:53.230180 1794879 system_pods.go:89] "kube-controller-manager-embed-certs-589812" [25e8105c-95a2-4761-a9a6-3e01225cde8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:53:53.230191 1794879 system_pods.go:89] "kube-proxy-xqvlx" [281c6535-72f3-429b-b4b1-df56cb3de2f5] Running
	I0904 06:53:53.230201 1794879 system_pods.go:89] "kube-scheduler-embed-certs-589812" [dbb61597-bbca-422b-b8b6-45821409cb91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:53:53.230212 1794879 system_pods.go:89] "metrics-server-746fcd58dc-prlxr" [58b70501-6011-4b99-80ff-1f9b422ae481] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:53:53.230218 1794879 system_pods.go:89] "storage-provisioner" [df8bd0bd-3bd4-461e-b276-edf75af8897e] Running
	I0904 06:53:53.230227 1794879 system_pods.go:126] duration metric: took 2.90283ms to wait for k8s-apps to be running ...
	I0904 06:53:53.230240 1794879 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:53:53.230287 1794879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:53:53.241829 1794879 system_svc.go:56] duration metric: took 11.584133ms WaitForService to wait for kubelet
	I0904 06:53:53.241853 1794879 kubeadm.go:578] duration metric: took 7.406070053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:53:53.241869 1794879 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:53:53.244406 1794879 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:53:53.244445 1794879 node_conditions.go:123] node cpu capacity is 8
	I0904 06:53:53.244459 1794879 node_conditions.go:105] duration metric: took 2.584951ms to run NodePressure ...
	I0904 06:53:53.244478 1794879 start.go:241] waiting for startup goroutines ...
	I0904 06:53:53.244492 1794879 start.go:246] waiting for cluster config update ...
	I0904 06:53:53.244509 1794879 start.go:255] writing updated cluster config ...
	I0904 06:53:53.244784 1794879 ssh_runner.go:195] Run: rm -f paused
	I0904 06:53:53.248131 1794879 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:53:53.251511 1794879 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:53:49.659280 1796928 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-520775" ...
	I0904 06:53:49.659366 1796928 cli_runner.go:164] Run: docker start default-k8s-diff-port-520775
	I0904 06:53:49.944765 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:49.965484 1796928 kic.go:430] container "default-k8s-diff-port-520775" state is running.
	I0904 06:53:49.965966 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:49.984536 1796928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/config.json ...
	I0904 06:53:49.984754 1796928 machine.go:93] provisionDockerMachine start ...
	I0904 06:53:49.984828 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:50.006739 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:50.007122 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:50.007149 1796928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 06:53:50.011282 1796928 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0904 06:53:53.135459 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.135490 1796928 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-520775"
	I0904 06:53:53.135560 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.153046 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.153307 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.153323 1796928 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-520775 && echo "default-k8s-diff-port-520775" | sudo tee /etc/hostname
	I0904 06:53:53.284177 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-520775
	
	I0904 06:53:53.284278 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.302854 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.303062 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.303082 1796928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-520775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-520775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-520775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 06:53:53.428269 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 06:53:53.428306 1796928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1516970/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1516970/.minikube}
	I0904 06:53:53.428357 1796928 ubuntu.go:190] setting up certificates
	I0904 06:53:53.428381 1796928 provision.go:84] configureAuth start
	I0904 06:53:53.428449 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:53.447935 1796928 provision.go:143] copyHostCerts
	I0904 06:53:53.448036 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem, removing ...
	I0904 06:53:53.448051 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem
	I0904 06:53:53.448113 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem (1082 bytes)
	I0904 06:53:53.448215 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem, removing ...
	I0904 06:53:53.448223 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem
	I0904 06:53:53.448247 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem (1123 bytes)
	I0904 06:53:53.448320 1796928 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem, removing ...
	I0904 06:53:53.448326 1796928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem
	I0904 06:53:53.448347 1796928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem (1675 bytes)
	I0904 06:53:53.448409 1796928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-520775 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-520775 localhost minikube]
	I0904 06:53:53.540900 1796928 provision.go:177] copyRemoteCerts
	I0904 06:53:53.540966 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 06:53:53.541003 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.558727 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:53.650335 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 06:53:53.677813 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0904 06:53:53.700987 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 06:53:53.724318 1796928 provision.go:87] duration metric: took 295.918548ms to configureAuth
	I0904 06:53:53.724345 1796928 ubuntu.go:206] setting minikube options for container-runtime
	I0904 06:53:53.724529 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:53:53.724626 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:53.743241 1796928 main.go:141] libmachine: Using SSH client type: native
	I0904 06:53:53.743467 1796928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I0904 06:53:53.743488 1796928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 06:53:54.045106 1796928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 06:53:54.045134 1796928 machine.go:96] duration metric: took 4.060362432s to provisionDockerMachine
	I0904 06:53:54.045148 1796928 start.go:293] postStartSetup for "default-k8s-diff-port-520775" (driver="docker")
	I0904 06:53:54.045187 1796928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 06:53:54.045256 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 06:53:54.045307 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.064198 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.152873 1796928 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 06:53:54.156293 1796928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 06:53:54.156319 1796928 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 06:53:54.156326 1796928 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 06:53:54.156333 1796928 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 06:53:54.156345 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/addons for local assets ...
	I0904 06:53:54.156399 1796928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/files for local assets ...
	I0904 06:53:54.156481 1796928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem -> 15207162.pem in /etc/ssl/certs
	I0904 06:53:54.156610 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 06:53:54.165073 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:54.187780 1796928 start.go:296] duration metric: took 142.614938ms for postStartSetup
	I0904 06:53:54.187887 1796928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:53:54.187937 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.205683 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.292859 1796928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 06:53:54.297265 1796928 fix.go:56] duration metric: took 4.65950064s for fixHost
	I0904 06:53:54.297289 1796928 start.go:83] releasing machines lock for "default-k8s-diff-port-520775", held for 4.659549727s
	I0904 06:53:54.297358 1796928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-520775
	I0904 06:53:54.315327 1796928 ssh_runner.go:195] Run: cat /version.json
	I0904 06:53:54.315393 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.315420 1796928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 06:53:54.315484 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:54.335338 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.336109 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:54.493584 1796928 ssh_runner.go:195] Run: systemctl --version
	I0904 06:53:54.498345 1796928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 06:53:54.638467 1796928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 06:53:54.642924 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.652284 1796928 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 06:53:54.652347 1796928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:53:54.660849 1796928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 06:53:54.660875 1796928 start.go:495] detecting cgroup driver to use...
	I0904 06:53:54.660913 1796928 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 06:53:54.660966 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 06:53:54.672418 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 06:53:54.683134 1796928 docker.go:218] disabling cri-docker service (if available) ...
	I0904 06:53:54.683181 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 06:53:54.695400 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 06:53:54.706646 1796928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 06:53:54.793740 1796928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 06:53:54.873854 1796928 docker.go:234] disabling docker service ...
	I0904 06:53:54.873933 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 06:53:54.885885 1796928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 06:53:54.896737 1796928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 06:53:54.980788 1796928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 06:53:55.057730 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 06:53:55.068310 1796928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 06:53:55.083683 1796928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 06:53:55.083736 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.093158 1796928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 06:53:55.093215 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.102672 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.113082 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.122399 1796928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 06:53:55.131334 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.140602 1796928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.150009 1796928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:53:55.159908 1796928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 06:53:55.167649 1796928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 06:53:55.175680 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.254239 1796928 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 06:53:55.362926 1796928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 06:53:55.363001 1796928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 06:53:55.366648 1796928 start.go:563] Will wait 60s for crictl version
	I0904 06:53:55.366695 1796928 ssh_runner.go:195] Run: which crictl
	I0904 06:53:55.369962 1796928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 06:53:55.403453 1796928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 06:53:55.403538 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.441474 1796928 ssh_runner.go:195] Run: crio --version
	I0904 06:53:55.479608 1796928 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 06:53:55.480915 1796928 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-520775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 06:53:55.497935 1796928 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0904 06:53:55.502150 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:53:55.514295 1796928 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 06:53:55.514485 1796928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 06:53:55.514556 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.564218 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.564245 1796928 crio.go:433] Images already preloaded, skipping extraction
	I0904 06:53:55.564292 1796928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:53:55.602409 1796928 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:53:55.602436 1796928 cache_images.go:85] Images are preloaded, skipping loading
	I0904 06:53:55.602446 1796928 kubeadm.go:926] updating node { 192.168.103.2 8444 v1.34.0 crio true true} ...
	I0904 06:53:55.602577 1796928 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-520775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 06:53:55.602645 1796928 ssh_runner.go:195] Run: crio config
	I0904 06:53:55.664543 1796928 cni.go:84] Creating CNI manager for ""
	I0904 06:53:55.664570 1796928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 06:53:55.664584 1796928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 06:53:55.664612 1796928 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-520775 NodeName:default-k8s-diff-port-520775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 06:53:55.664768 1796928 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-520775"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 06:53:55.664845 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 06:53:55.673590 1796928 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 06:53:55.673661 1796928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 06:53:55.682016 1796928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0904 06:53:55.699448 1796928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 06:53:55.717472 1796928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0904 06:53:55.734579 1796928 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0904 06:53:55.737941 1796928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:53:55.748899 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:55.834506 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:55.848002 1796928 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775 for IP: 192.168.103.2
	I0904 06:53:55.848028 1796928 certs.go:194] generating shared ca certs ...
	I0904 06:53:55.848048 1796928 certs.go:226] acquiring lock for ca certs: {Name:mk2d06825c36f44304767b415a9a93c84edb2667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:55.848186 1796928 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key
	I0904 06:53:55.848228 1796928 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key
	I0904 06:53:55.848237 1796928 certs.go:256] generating profile certs ...
	I0904 06:53:55.848310 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/client.key
	I0904 06:53:55.848365 1796928 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key.6ec15110
	I0904 06:53:55.848406 1796928 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key
	I0904 06:53:55.848517 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem (1338 bytes)
	W0904 06:53:55.848547 1796928 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716_empty.pem, impossibly tiny 0 bytes
	I0904 06:53:55.848556 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 06:53:55.848578 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem (1082 bytes)
	I0904 06:53:55.848601 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem (1123 bytes)
	I0904 06:53:55.848627 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem (1675 bytes)
	I0904 06:53:55.848669 1796928 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 06:53:55.849251 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 06:53:55.876639 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 06:53:55.904012 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 06:53:55.936371 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 06:53:56.018233 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0904 06:53:56.041340 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 06:53:56.065911 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 06:53:56.089737 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 06:53:56.112935 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /usr/share/ca-certificates/15207162.pem (1708 bytes)
	I0904 06:53:56.138060 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 06:53:56.162385 1796928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem --> /usr/share/ca-certificates/1520716.pem (1338 bytes)
	I0904 06:53:56.185546 1796928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 06:53:56.202891 1796928 ssh_runner.go:195] Run: openssl version
	I0904 06:53:56.208611 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15207162.pem && ln -fs /usr/share/ca-certificates/15207162.pem /etc/ssl/certs/15207162.pem"
	I0904 06:53:56.219865 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223785 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:07 /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.223867 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15207162.pem
	I0904 06:53:56.231657 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15207162.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 06:53:56.243527 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 06:53:56.253334 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257449 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 06:00 /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.257517 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:53:56.264253 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 06:53:56.273629 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1520716.pem && ln -fs /usr/share/ca-certificates/1520716.pem /etc/ssl/certs/1520716.pem"
	I0904 06:53:56.283120 1796928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286378 1796928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:07 /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.286450 1796928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1520716.pem
	I0904 06:53:56.293207 1796928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1520716.pem /etc/ssl/certs/51391683.0"
	I0904 06:53:56.301668 1796928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 06:53:56.308006 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 06:53:56.315155 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 06:53:56.322059 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 06:53:56.329568 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 06:53:56.337737 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 06:53:56.345511 1796928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0904 06:53:56.353351 1796928 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-520775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-520775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:53:56.353482 1796928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 06:53:56.353539 1796928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 06:53:56.397941 1796928 cri.go:89] found id: ""
	I0904 06:53:56.398012 1796928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 06:53:56.408886 1796928 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 06:53:56.408981 1796928 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0904 06:53:56.409041 1796928 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 06:53:56.424530 1796928 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 06:53:56.425727 1796928 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-520775" does not appear in /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.426580 1796928 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-1516970/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-520775" cluster setting kubeconfig missing "default-k8s-diff-port-520775" context setting]
	I0904 06:53:56.427949 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.430031 1796928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 06:53:56.444430 1796928 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0904 06:53:56.444470 1796928 kubeadm.go:593] duration metric: took 35.478353ms to restartPrimaryControlPlane
	I0904 06:53:56.444481 1796928 kubeadm.go:394] duration metric: took 91.143305ms to StartCluster
	I0904 06:53:56.444503 1796928 settings.go:142] acquiring lock: {Name:mk2d1c8a569b62879275d6daa2b799b595d6bca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.444560 1796928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:53:56.447245 1796928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:53:56.447495 1796928 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 06:53:56.447711 1796928 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 06:53:56.447836 1796928 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447860 1796928 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447868 1796928 addons.go:247] addon storage-provisioner should already be in state true
	I0904 06:53:56.447888 1796928 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447903 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.447928 1796928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-520775"
	I0904 06:53:56.447921 1796928 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447939 1796928 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-520775"
	I0904 06:53:56.447970 1796928 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.447979 1796928 addons.go:247] addon dashboard should already be in state true
	I0904 06:53:56.447980 1796928 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	W0904 06:53:56.447982 1796928 addons.go:247] addon metrics-server should already be in state true
	I0904 06:53:56.448017 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448020 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.448276 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448431 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448473 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.448520 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.450093 1796928 out.go:179] * Verifying Kubernetes components...
	I0904 06:53:56.451389 1796928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:53:56.482390 1796928 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-520775"
	W0904 06:53:56.482412 1796928 addons.go:247] addon default-storageclass should already be in state true
	I0904 06:53:56.482437 1796928 host.go:66] Checking if "default-k8s-diff-port-520775" exists ...
	I0904 06:53:56.482730 1796928 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-520775 --format={{.State.Status}}
	I0904 06:53:56.485071 1796928 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 06:53:56.485089 1796928 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0904 06:53:56.488270 1796928 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.488294 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 06:53:56.488355 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.490382 1796928 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0904 06:53:56.491521 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0904 06:53:56.491536 1796928 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0904 06:53:56.491584 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.496773 1796928 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W0904 06:53:55.257485 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:53:57.757496 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:53:56.497920 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 06:53:56.497941 1796928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 06:53:56.498005 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.511983 1796928 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.512010 1796928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 06:53:56.512072 1796928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-520775
	I0904 06:53:56.529596 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.531423 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.543761 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.547939 1796928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/default-k8s-diff-port-520775/id_rsa Username:docker}
	I0904 06:53:56.815518 1796928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:53:56.824564 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:56.900475 1796928 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:53:56.903122 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:56.915401 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0904 06:53:56.915439 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0904 06:53:57.011674 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 06:53:57.011705 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0904 06:53:57.025890 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0904 06:53:57.025929 1796928 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0904 06:53:57.130640 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0904 06:53:57.130669 1796928 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0904 06:53:57.201935 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 06:53:57.201971 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	W0904 06:53:57.228446 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228496 1796928 retry.go:31] will retry after 331.542893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0904 06:53:57.228576 1796928 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.228595 1796928 retry.go:31] will retry after 234.661911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 06:53:57.233201 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0904 06:53:57.233235 1796928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0904 06:53:57.312449 1796928 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.312483 1796928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 06:53:57.335196 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0904 06:53:57.335296 1796928 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0904 06:53:57.340794 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 06:53:57.423747 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0904 06:53:57.423855 1796928 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0904 06:53:57.464378 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:53:57.517739 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0904 06:53:57.517836 1796928 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0904 06:53:57.560380 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:53:57.621494 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0904 06:53:57.621580 1796928 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0904 06:53:57.719817 1796928 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:53:57.719851 1796928 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0904 06:53:57.808921 1796928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0904 06:54:00.222294 1796928 node_ready.go:49] node "default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:00.222393 1796928 node_ready.go:38] duration metric: took 3.321861305s for node "default-k8s-diff-port-520775" to be "Ready" ...
	I0904 06:54:00.222414 1796928 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:54:00.222514 1796928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:54:02.420531 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.07964965s)
	I0904 06:54:02.420574 1796928 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-520775"
	I0904 06:54:02.420586 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.956118872s)
	I0904 06:54:02.420682 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.860244874s)
	I0904 06:54:02.420925 1796928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.611964012s)
	I0904 06:54:02.420956 1796928 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.198413181s)
	I0904 06:54:02.421147 1796928 api_server.go:72] duration metric: took 5.973615373s to wait for apiserver process to appear ...
	I0904 06:54:02.421161 1796928 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:54:02.421181 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.422911 1796928 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-520775 addons enable metrics-server
	
	I0904 06:54:02.426397 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.426463 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:02.428576 1796928 out.go:179] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	W0904 06:53:59.759069 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:02.258100 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:02.429861 1796928 addons.go:514] duration metric: took 5.982154586s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0904 06:54:02.921448 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:02.926218 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:54:02.926239 1796928 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:54:03.421924 1796928 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I0904 06:54:03.427035 1796928 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I0904 06:54:03.428103 1796928 api_server.go:141] control plane version: v1.34.0
	I0904 06:54:03.428127 1796928 api_server.go:131] duration metric: took 1.006959868s to wait for apiserver health ...
	I0904 06:54:03.428136 1796928 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:54:03.434471 1796928 system_pods.go:59] 9 kube-system pods found
	I0904 06:54:03.434508 1796928 system_pods.go:61] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.434519 1796928 system_pods.go:61] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.434525 1796928 system_pods.go:61] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.434533 1796928 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.434544 1796928 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.434564 1796928 system_pods.go:61] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.434573 1796928 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.434586 1796928 system_pods.go:61] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.434594 1796928 system_pods.go:61] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.434602 1796928 system_pods.go:74] duration metric: took 6.460113ms to wait for pod list to return data ...
	I0904 06:54:03.434614 1796928 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:54:03.437095 1796928 default_sa.go:45] found service account: "default"
	I0904 06:54:03.437116 1796928 default_sa.go:55] duration metric: took 2.49678ms for default service account to be created ...
	I0904 06:54:03.437124 1796928 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:54:03.439954 1796928 system_pods.go:86] 9 kube-system pods found
	I0904 06:54:03.439997 1796928 system_pods.go:89] "coredns-66bc5c9577-hm47q" [e73fad8a-ad1b-475f-a4ea-bfda49587ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:54:03.440010 1796928 system_pods.go:89] "etcd-default-k8s-diff-port-520775" [5829ac4b-ff8b-4d46-9be9-0947be850651] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:54:03.440018 1796928 system_pods.go:89] "kindnet-wz7lg" [8e231614-2126-4bd8-b77d-a4e98bfbcd0b] Running
	I0904 06:54:03.440029 1796928 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-520775" [95d6a6b9-81f2-48b3-8343-289600b99b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:54:03.440043 1796928 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-520775" [69053048-8fce-4b4b-8df8-a8f7415bf602] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:54:03.440053 1796928 system_pods.go:89] "kube-proxy-zrlrh" [df5878ee-bf16-4a99-894c-1f83484bbc3b] Running
	I0904 06:54:03.440060 1796928 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-520775" [e52ed283-6545-4336-8d7a-e26c18f54b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:54:03.440072 1796928 system_pods.go:89] "metrics-server-746fcd58dc-gws8j" [16bf9326-2429-4d6b-a6ed-6dc44262c35e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 06:54:03.440078 1796928 system_pods.go:89] "storage-provisioner" [0f88021c-f0ad-4130-8cb1-06f073f45244] Running
	I0904 06:54:03.440085 1796928 system_pods.go:126] duration metric: took 2.955ms to wait for k8s-apps to be running ...
	I0904 06:54:03.440100 1796928 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:54:03.440162 1796928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:54:03.451705 1796928 system_svc.go:56] duration metric: took 11.594555ms WaitForService to wait for kubelet
	I0904 06:54:03.451731 1796928 kubeadm.go:578] duration metric: took 7.004201759s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:54:03.451748 1796928 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:54:03.455005 1796928 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0904 06:54:03.455036 1796928 node_conditions.go:123] node cpu capacity is 8
	I0904 06:54:03.455062 1796928 node_conditions.go:105] duration metric: took 3.308068ms to run NodePressure ...
	I0904 06:54:03.455079 1796928 start.go:241] waiting for startup goroutines ...
	I0904 06:54:03.455095 1796928 start.go:246] waiting for cluster config update ...
	I0904 06:54:03.455112 1796928 start.go:255] writing updated cluster config ...
	I0904 06:54:03.455408 1796928 ssh_runner.go:195] Run: rm -f paused
	I0904 06:54:03.458944 1796928 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:03.462665 1796928 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 06:54:04.757792 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:07.257591 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:05.468478 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:07.500893 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:09.756895 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:12.257352 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:09.968652 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:12.468012 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:14.756854 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:17.256905 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:14.468746 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:16.967726 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:18.968373 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:19.257325 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:21.757694 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	W0904 06:54:20.968633 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:23.467871 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:24.256489 1794879 pod_ready.go:104] pod "coredns-66bc5c9577-j5gww" is not "Ready", error: <nil>
	I0904 06:54:24.756710 1794879 pod_ready.go:94] pod "coredns-66bc5c9577-j5gww" is "Ready"
	I0904 06:54:24.756744 1794879 pod_ready.go:86] duration metric: took 31.505206553s for pod "coredns-66bc5c9577-j5gww" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.759357 1794879 pod_ready.go:83] waiting for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.763174 1794879 pod_ready.go:94] pod "etcd-embed-certs-589812" is "Ready"
	I0904 06:54:24.763194 1794879 pod_ready.go:86] duration metric: took 3.815458ms for pod "etcd-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.765056 1794879 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.768709 1794879 pod_ready.go:94] pod "kube-apiserver-embed-certs-589812" is "Ready"
	I0904 06:54:24.768729 1794879 pod_ready.go:86] duration metric: took 3.655905ms for pod "kube-apiserver-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.770312 1794879 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:24.955369 1794879 pod_ready.go:94] pod "kube-controller-manager-embed-certs-589812" is "Ready"
	I0904 06:54:24.955399 1794879 pod_ready.go:86] duration metric: took 185.06856ms for pod "kube-controller-manager-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.155371 1794879 pod_ready.go:83] waiting for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.555016 1794879 pod_ready.go:94] pod "kube-proxy-xqvlx" is "Ready"
	I0904 06:54:25.555045 1794879 pod_ready.go:86] duration metric: took 399.644529ms for pod "kube-proxy-xqvlx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:25.754864 1794879 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155740 1794879 pod_ready.go:94] pod "kube-scheduler-embed-certs-589812" is "Ready"
	I0904 06:54:26.155768 1794879 pod_ready.go:86] duration metric: took 400.874171ms for pod "kube-scheduler-embed-certs-589812" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:26.155779 1794879 pod_ready.go:40] duration metric: took 32.907618487s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:26.201526 1794879 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:26.203310 1794879 out.go:179] * Done! kubectl is now configured to use "embed-certs-589812" cluster and "default" namespace by default
	W0904 06:54:25.468180 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:27.468649 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:29.468703 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:31.967748 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	W0904 06:54:34.467966 1796928 pod_ready.go:104] pod "coredns-66bc5c9577-hm47q" is not "Ready", error: <nil>
	I0904 06:54:36.468207 1796928 pod_ready.go:94] pod "coredns-66bc5c9577-hm47q" is "Ready"
	I0904 06:54:36.468238 1796928 pod_ready.go:86] duration metric: took 33.005546695s for pod "coredns-66bc5c9577-hm47q" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.470247 1796928 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.474087 1796928 pod_ready.go:94] pod "etcd-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.474113 1796928 pod_ready.go:86] duration metric: took 3.802864ms for pod "etcd-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.476057 1796928 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.479419 1796928 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.479437 1796928 pod_ready.go:86] duration metric: took 3.359104ms for pod "kube-apiserver-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.481399 1796928 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.666267 1796928 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:36.666294 1796928 pod_ready.go:86] duration metric: took 184.873705ms for pod "kube-controller-manager-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:36.866510 1796928 pod_ready.go:83] waiting for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.266395 1796928 pod_ready.go:94] pod "kube-proxy-zrlrh" is "Ready"
	I0904 06:54:37.266428 1796928 pod_ready.go:86] duration metric: took 399.888589ms for pod "kube-proxy-zrlrh" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.466543 1796928 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866935 1796928 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-520775" is "Ready"
	I0904 06:54:37.866974 1796928 pod_ready.go:86] duration metric: took 400.403816ms for pod "kube-scheduler-default-k8s-diff-port-520775" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:54:37.866986 1796928 pod_ready.go:40] duration metric: took 34.408008083s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:54:37.912300 1796928 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 06:54:37.913920 1796928 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-520775" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 07:08:15 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:15.120275415Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=edd01f05-7af3-4fcd-84cc-531c55df568c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:25 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:25.119496988Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3622263e-bec5-492b-af03-700c2a4ab453 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:25 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:25.119915894Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3622263e-bec5-492b-af03-700c2a4ab453 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:29 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:29.119915042Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=ddf9b5fb-f6a9-4783-9932-bc91442a53a5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:29 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:29.120177110Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=ddf9b5fb-f6a9-4783-9932-bc91442a53a5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:37 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:37.120400485Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=46b0c36a-a71d-41c3-be0b-722ea7cc75cb name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:37 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:37.120739586Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=46b0c36a-a71d-41c3-be0b-722ea7cc75cb name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:42 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:42.120150812Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5abcf980-60eb-4597-b25a-b3ed55307e8b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:42 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:42.120437667Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5abcf980-60eb-4597-b25a-b3ed55307e8b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:49 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:49.120288480Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2b3d7599-477a-41c6-9c59-91727582813b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:49 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:49.120550691Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=2b3d7599-477a-41c6-9c59-91727582813b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:56 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:56.119980557Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=de7a22a7-ea53-4788-9685-f86458e3238c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:56 old-k8s-version-869290 crio[682]: time="2025-09-04 07:08:56.120267297Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=de7a22a7-ea53-4788-9685-f86458e3238c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:04 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:04.119488995Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ce392956-bd24-431c-b4a0-d1f7a553e2d1 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:04 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:04.119864360Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ce392956-bd24-431c-b4a0-d1f7a553e2d1 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:08 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:08.120264595Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d3993ca2-48c9-4e82-ae6d-8274cd07694b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:08 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:08.120548088Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d3993ca2-48c9-4e82-ae6d-8274cd07694b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:18 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:18.120389317Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=73f5f121-08a6-42b1-8b1c-1e737be91013 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:18 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:18.120707189Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=73f5f121-08a6-42b1-8b1c-1e737be91013 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:23 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:23.119688934Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=bb3df7ac-2633-4178-94d3-64f22dd025c5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:23 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:23.120013171Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=bb3df7ac-2633-4178-94d3-64f22dd025c5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:29 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:29.119728065Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=06e39f9a-dff2-45b8-a469-d39126088693 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:29 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:29.120251807Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=06e39f9a-dff2-45b8-a469-d39126088693 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:34 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:34.120237507Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8674652b-0462-4a6a-bb02-7b8b76aa0b4c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:34 old-k8s-version-869290 crio[682]: time="2025-09-04 07:09:34.121464055Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8674652b-0462-4a6a-bb02-7b8b76aa0b4c name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	49b779c58746a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   About a minute ago   Exited              dashboard-metrics-scraper   8                   e09a6ababe5c0       dashboard-metrics-scraper-5f989dc9cf-b8rrc
	190aec8c45b0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago       Running             storage-provisioner         2                   ef28d474b8abd       storage-provisioner
	b91e293d6f376       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago       Running             busybox                     1                   f607dd984555f       busybox
	ad32663a51f8a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 minutes ago       Running             coredns                     1                   17968ce457a9c       coredns-5dd5756b68-plrdh
	619bf3076c8f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago       Exited              storage-provisioner         1                   ef28d474b8abd       storage-provisioner
	5fd80a4de7446       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a   18 minutes ago       Running             kube-proxy                  1                   5411a53c1e3ce       kube-proxy-mk95k
	be0827961faeb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago       Running             kindnet-cni                 1                   9d4ac574b6b95       kindnet-qt2lt
	216c4f395c622       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   18 minutes ago       Running             etcd                        1                   f4dc1f328e53f       etcd-old-k8s-version-869290
	77f69d5438aa2       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95   18 minutes ago       Running             kube-apiserver              1                   933ae54e980db       kube-apiserver-old-k8s-version-869290
	c52fac654a506       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157   18 minutes ago       Running             kube-scheduler              1                   8cc5327484762       kube-scheduler-old-k8s-version-869290
	3cc2cf8e6bb3d       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62   18 minutes ago       Running             kube-controller-manager     1                   a8fe31e4f0451       kube-controller-manager-old-k8s-version-869290
	
	
	==> coredns [ad32663a51f8a226fee8527c4055d4e037a41fda7996a7fcd753ad350a4e0410] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48154 - 60154 "HINFO IN 6168828961770051816.2673140864784376398. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025063103s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-869290
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-869290
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=old-k8s-version-869290
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_49_52_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:49:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-869290
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 07:09:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 07:07:13 +0000   Thu, 04 Sep 2025 06:49:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 07:07:13 +0000   Thu, 04 Sep 2025 06:49:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 07:07:13 +0000   Thu, 04 Sep 2025 06:49:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 07:07:13 +0000   Thu, 04 Sep 2025 06:50:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-869290
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1b2795dc83a4f7ba901f1f8ac9725e1
	  System UUID:                9a3a2904-3fd2-42f5-8dd5-d48ec28a2076
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-plrdh                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-869290                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-qt2lt                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-869290             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-869290    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-mk95k                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-869290             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-9q8f6                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-b8rrc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-ctkhj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x9 over 19m)  kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node old-k8s-version-869290 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-869290 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-869290 event: Registered Node old-k8s-version-869290 in Controller
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-869290 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-869290 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node old-k8s-version-869290 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-869290 event: Registered Node old-k8s-version-869290 in Controller
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +2.011770] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000003] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +1.535866] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000001] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.003918] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +2.555764] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000006] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000004] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +8.191102] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000008] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000002] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	
	
	==> etcd [216c4f395c622e64c119af83270d04476d7dff81ddcf948d0e2caa7e660d9156] <==
	{"level":"info","ts":"2025-09-04T06:50:51.522855Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-04T06:50:51.522888Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-04T06:50:51.523009Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-04T06:50:51.523019Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-04T06:50:53.103377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-04T06:50:53.103435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-04T06:50:53.10347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-09-04T06:50:53.10349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-09-04T06:50:53.103496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-09-04T06:50:53.103507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-09-04T06:50:53.103514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-09-04T06:50:53.104427Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-869290 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-04T06:50:53.104487Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-04T06:50:53.104593Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-04T06:50:53.104616Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-04T06:50:53.104478Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-04T06:50:53.105823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-04T06:50:53.105824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-09-04T06:52:49.25707Z","caller":"traceutil/trace.go:171","msg":"trace[109768138] transaction","detail":"{read_only:false; response_revision:777; number_of_response:1; }","duration":"130.585589ms","start":"2025-09-04T06:52:49.126448Z","end":"2025-09-04T06:52:49.257033Z","steps":["trace[109768138] 'process raft request'  (duration: 85.9ms)","trace[109768138] 'compare'  (duration: 44.476392ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T07:00:53.124513Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2025-09-04T07:00:53.126603Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":964,"took":"1.801654ms","hash":3444427037}
	{"level":"info","ts":"2025-09-04T07:00:53.126655Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3444427037,"revision":964,"compact-revision":-1}
	{"level":"info","ts":"2025-09-04T07:05:53.129169Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1216}
	{"level":"info","ts":"2025-09-04T07:05:53.130293Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1216,"took":"826.097µs","hash":161135272}
	{"level":"info","ts":"2025-09-04T07:05:53.130325Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":161135272,"revision":1216,"compact-revision":964}
	
	
	==> kernel <==
	 07:09:39 up  4:52,  0 users,  load average: 0.38, 0.58, 1.13
	Linux old-k8s-version-869290 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [be0827961faeb2668d30a2191b7355359dc8d4c3c703ad7443cff934d506cb72] <==
	I0904 07:07:36.703954       1 main.go:301] handling current node
	I0904 07:07:46.701028       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:07:46.701064       1 main.go:301] handling current node
	I0904 07:07:56.707910       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:07:56.707959       1 main.go:301] handling current node
	I0904 07:08:06.702751       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:08:06.702785       1 main.go:301] handling current node
	I0904 07:08:16.701405       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:08:16.701444       1 main.go:301] handling current node
	I0904 07:08:26.707930       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:08:26.707966       1 main.go:301] handling current node
	I0904 07:08:36.701132       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:08:36.701186       1 main.go:301] handling current node
	I0904 07:08:46.703641       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:08:46.703672       1 main.go:301] handling current node
	I0904 07:08:56.707895       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:08:56.707929       1 main.go:301] handling current node
	I0904 07:09:06.701107       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:09:06.701147       1 main.go:301] handling current node
	I0904 07:09:16.703119       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:09:16.703151       1 main.go:301] handling current node
	I0904 07:09:26.703948       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:09:26.703993       1 main.go:301] handling current node
	I0904 07:09:36.702327       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0904 07:09:36.702390       1 main.go:301] handling current node
	
	
	==> kube-apiserver [77f69d5438aa2072ffdf6b91b3958e71249533445cfb6477abdfb8612bf08489] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0904 07:05:55.420619       1 handler_proxy.go:93] no RequestInfo found in the context
	I0904 07:05:55.420633       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0904 07:05:55.420654       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0904 07:05:55.421781       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:06:54.148333       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.204.177:443: connect: connection refused
	I0904 07:06:54.148357       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0904 07:06:55.420857       1 handler_proxy.go:93] no RequestInfo found in the context
	E0904 07:06:55.420960       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0904 07:06:55.420971       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:06:55.421983       1 handler_proxy.go:93] no RequestInfo found in the context
	E0904 07:06:55.422016       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0904 07:06:55.422026       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:07:54.148684       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.204.177:443: connect: connection refused
	I0904 07:07:54.148722       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0904 07:08:54.147932       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.204.177:443: connect: connection refused
	I0904 07:08:54.147958       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0904 07:08:55.422133       1 handler_proxy.go:93] no RequestInfo found in the context
	W0904 07:08:55.422143       1 handler_proxy.go:93] no RequestInfo found in the context
	E0904 07:08:55.422207       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0904 07:08:55.422214       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0904 07:08:55.422226       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0904 07:08:55.423355       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3cc2cf8e6bb3d7c6e97880baf7fee195f6522eec30032ed014e472ba43b31616] <==
	I0904 07:05:09.076317       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0904 07:05:38.130624       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="155.793µs"
	E0904 07:05:38.584094       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:05:39.083844       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0904 07:05:50.129786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="155.299µs"
	E0904 07:06:08.588647       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:06:09.091237       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 07:06:38.593872       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:06:39.098369       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 07:07:08.598070       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:07:09.105653       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 07:07:38.602135       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:07:39.112443       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0904 07:08:01.311259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="119.454µs"
	I0904 07:08:03.131246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="225.805µs"
	E0904 07:08:08.607422       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:08:09.120152       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0904 07:08:09.181862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="131.966µs"
	I0904 07:08:15.129570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="119.346µs"
	E0904 07:08:38.612476       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:08:39.126330       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 07:09:08.617677       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:09:09.134104       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0904 07:09:38.622665       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0904 07:09:39.141843       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5fd80a4de7446b801c5330df8fed98c34cc77d6bd01abc2aa9e5b5bb8d8015bd] <==
	I0904 06:50:56.422436       1 server_others.go:69] "Using iptables proxy"
	I0904 06:50:56.501444       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0904 06:50:56.534461       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:50:56.536511       1 server_others.go:152] "Using iptables Proxier"
	I0904 06:50:56.536540       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0904 06:50:56.536547       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0904 06:50:56.536602       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0904 06:50:56.536960       1 server.go:846] "Version info" version="v1.28.0"
	I0904 06:50:56.537005       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:50:56.538082       1 config.go:188] "Starting service config controller"
	I0904 06:50:56.538113       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0904 06:50:56.538172       1 config.go:315] "Starting node config controller"
	I0904 06:50:56.538182       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0904 06:50:56.539580       1 config.go:97] "Starting endpoint slice config controller"
	I0904 06:50:56.539617       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0904 06:50:56.638442       1 shared_informer.go:318] Caches are synced for node config
	I0904 06:50:56.638538       1 shared_informer.go:318] Caches are synced for service config
	I0904 06:50:56.640074       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c52fac654a5067fa08334b4a0d9d11c862aee02eb14d5aca97e094d63b613e72] <==
	I0904 06:50:52.162424       1 serving.go:348] Generated self-signed cert in-memory
	W0904 06:50:54.316382       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:50:54.316503       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:50:54.316545       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:50:54.316597       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:50:54.418583       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0904 06:50:54.418614       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:50:54.420280       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:50:54.420323       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0904 06:50:54.421325       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0904 06:50:54.421494       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0904 06:50:54.521505       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 07:08:15 old-k8s-version-869290 kubelet[830]: E0904 07:08:15.120543     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 07:08:23 old-k8s-version-869290 kubelet[830]: I0904 07:08:23.119755     830 scope.go:117] "RemoveContainer" containerID="49b779c58746a54c3d42316198b78c47f051d5d4330ff2405ca811077b670e52"
	Sep 04 07:08:23 old-k8s-version-869290 kubelet[830]: E0904 07:08:23.120120     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 07:08:25 old-k8s-version-869290 kubelet[830]: E0904 07:08:25.120229     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	Sep 04 07:08:29 old-k8s-version-869290 kubelet[830]: E0904 07:08:29.120476     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 07:08:34 old-k8s-version-869290 kubelet[830]: I0904 07:08:34.119544     830 scope.go:117] "RemoveContainer" containerID="49b779c58746a54c3d42316198b78c47f051d5d4330ff2405ca811077b670e52"
	Sep 04 07:08:34 old-k8s-version-869290 kubelet[830]: E0904 07:08:34.120036     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 07:08:37 old-k8s-version-869290 kubelet[830]: E0904 07:08:37.121109     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	Sep 04 07:08:42 old-k8s-version-869290 kubelet[830]: E0904 07:08:42.120700     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 07:08:48 old-k8s-version-869290 kubelet[830]: I0904 07:08:48.119699     830 scope.go:117] "RemoveContainer" containerID="49b779c58746a54c3d42316198b78c47f051d5d4330ff2405ca811077b670e52"
	Sep 04 07:08:48 old-k8s-version-869290 kubelet[830]: E0904 07:08:48.120067     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 07:08:49 old-k8s-version-869290 kubelet[830]: E0904 07:08:49.120912     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	Sep 04 07:08:56 old-k8s-version-869290 kubelet[830]: E0904 07:08:56.120539     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 07:09:02 old-k8s-version-869290 kubelet[830]: I0904 07:09:02.119756     830 scope.go:117] "RemoveContainer" containerID="49b779c58746a54c3d42316198b78c47f051d5d4330ff2405ca811077b670e52"
	Sep 04 07:09:02 old-k8s-version-869290 kubelet[830]: E0904 07:09:02.120172     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 07:09:04 old-k8s-version-869290 kubelet[830]: E0904 07:09:04.120137     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	Sep 04 07:09:08 old-k8s-version-869290 kubelet[830]: E0904 07:09:08.120823     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 07:09:13 old-k8s-version-869290 kubelet[830]: I0904 07:09:13.119123     830 scope.go:117] "RemoveContainer" containerID="49b779c58746a54c3d42316198b78c47f051d5d4330ff2405ca811077b670e52"
	Sep 04 07:09:13 old-k8s-version-869290 kubelet[830]: E0904 07:09:13.119409     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 07:09:18 old-k8s-version-869290 kubelet[830]: E0904 07:09:18.121296     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	Sep 04 07:09:23 old-k8s-version-869290 kubelet[830]: E0904 07:09:23.120335     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	Sep 04 07:09:26 old-k8s-version-869290 kubelet[830]: I0904 07:09:26.119140     830 scope.go:117] "RemoveContainer" containerID="49b779c58746a54c3d42316198b78c47f051d5d4330ff2405ca811077b670e52"
	Sep 04 07:09:26 old-k8s-version-869290 kubelet[830]: E0904 07:09:26.119546     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-b8rrc_kubernetes-dashboard(590c1fe6-6767-404a-aed6-f3e0ae9cc472)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-b8rrc" podUID="590c1fe6-6767-404a-aed6-f3e0ae9cc472"
	Sep 04 07:09:29 old-k8s-version-869290 kubelet[830]: E0904 07:09:29.120605     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-ctkhj" podUID="191398b6-c62e-4c25-9bed-1fea30f5fed5"
	Sep 04 07:09:34 old-k8s-version-869290 kubelet[830]: E0904 07:09:34.121971     830 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9q8f6" podUID="e7c200f7-57db-4a22-a85a-a2c52168cce0"
	
	
	==> storage-provisioner [190aec8c45b0f19b4d7b202a54b0635d07eba13a1a6554ec4a037d1f8b416ed5] <==
	I0904 06:51:27.355253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 06:51:27.362417       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 06:51:27.362449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 06:51:44.755737       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 06:51:44.755841       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4bd924ae-d481-49b4-af7b-7da5f8f31cc5", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-869290_0e5432b1-255e-42fb-9770-8fe9480f71a8 became leader
	I0904 06:51:44.755925       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-869290_0e5432b1-255e-42fb-9770-8fe9480f71a8!
	I0904 06:51:44.856210       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-869290_0e5432b1-255e-42fb-9770-8fe9480f71a8!
	
	
	==> storage-provisioner [619bf3076c8f2810f712ad4979d9483bea3ce02acaf717d24aa9ec66120b9bcb] <==
	I0904 06:50:56.405711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 06:51:26.408292       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-869290 -n old-k8s-version-869290
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-869290 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-9q8f6 kubernetes-dashboard-8694d4445c-ctkhj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-869290 describe pod metrics-server-57f55c9bc5-9q8f6 kubernetes-dashboard-8694d4445c-ctkhj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-869290 describe pod metrics-server-57f55c9bc5-9q8f6 kubernetes-dashboard-8694d4445c-ctkhj: exit status 1 (58.608108ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9q8f6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-ctkhj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-869290 describe pod metrics-server-57f55c9bc5-9q8f6 kubernetes-dashboard-8694d4445c-ctkhj: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.52s)
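Note: the kubelet log captured above shows two distinct back-offs for this profile: metrics-server stuck in ImagePullBackOff on the deliberately unresolvable fake.domain/registry.k8s.io/echoserver:1.4 image, and dashboard-metrics-scraper in CrashLoopBackOff. A minimal manual-triage sketch, not part of the test run; it assumes the old-k8s-version-869290 context still exists and that the addons carry their usual k8s-app labels:

	kubectl --context old-k8s-version-869290 -n kube-system describe pod -l k8s-app=metrics-server
	kubectl --context old-k8s-version-869290 -n kubernetes-dashboard get events --sort-by=.lastTimestamp
	kubectl --context old-k8s-version-869290 -n kubernetes-dashboard logs -l k8s-app=dashboard-metrics-scraper --previous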

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rf2hg" [0a81ba81-116f-4a44-ab32-2b3c88744009] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 07:02:57.336994 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
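For reference, the 9m0s wait performed by the harness corresponds roughly to the following readiness wait (a sketch only; it assumes the no-preload-574576 context and the k8s-app=kubernetes-dashboard label used above):

	kubectl --context no-preload-574576 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s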
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-574576 -n no-preload-574576
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-04 07:09:54.850919715 +0000 UTC m=+4185.639950333
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-574576 describe po kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context no-preload-574576 describe po kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-rf2hg
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-574576/192.168.85.2
Start Time:       Thu, 04 Sep 2025 06:51:20 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tl5hg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-tl5hg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg to no-preload-574576
Warning  Failed     16m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     13m (x4 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     13m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m26s (x49 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m48s (x52 over 18m)  kubelet            Error: ImagePullBackOff
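The Warning events above show the pull being rejected by Docker Hub's anonymous pull rate limit (toomanyrequests), not by the cluster. One way to reproduce the error directly on the node, offered as a sketch rather than part of this run, assuming the no-preload-574576 profile is still running and crictl is available inside the node:

	minikube -p no-preload-574576 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93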
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-574576 logs kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-574576 logs kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard: exit status 1 (76.286839ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-rf2hg" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context no-preload-574576 logs kubernetes-dashboard-855c9754f9-rf2hg -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-574576 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-574576
helpers_test.go:243: (dbg) docker inspect no-preload-574576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2",
	        "Created": "2025-09-04T06:49:50.879365265Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1775251,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:51:03.89103056Z",
	            "FinishedAt": "2025-09-04T06:51:03.125518292Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2/hosts",
	        "LogPath": "/var/lib/docker/containers/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2/1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2-json.log",
	        "Name": "/no-preload-574576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-574576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-574576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e2279782717f025e39ad8467676395819231d93016b9e8139858b4d2d72b2b2",
	                "LowerDir": "/var/lib/docker/overlay2/7c6f0b0f0b456f106f7785e42901c4a1fddb7aed999e4717209f60fdb8d4249f-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7c6f0b0f0b456f106f7785e42901c4a1fddb7aed999e4717209f60fdb8d4249f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7c6f0b0f0b456f106f7785e42901c4a1fddb7aed999e4717209f60fdb8d4249f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7c6f0b0f0b456f106f7785e42901c4a1fddb7aed999e4717209f60fdb8d4249f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-574576",
	                "Source": "/var/lib/docker/volumes/no-preload-574576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-574576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-574576",
	                "name.minikube.sigs.k8s.io": "no-preload-574576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8aaa5a9c79bdf30a2cfa11cab0def2c8da5a2b1a89c15fab8d940ae32a5268ae",
	            "SandboxKey": "/var/run/docker/netns/8aaa5a9c79bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-574576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:18:65:52:11:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "512820bef1773b08fe7e32736d062562ad1b1adf8c8167147e68a5a3f69d7a8c",
	                    "EndpointID": "a7499aa96833efe822b154cf596a5437ccf18250c43f14c80d2d82618082223f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-574576",
	                        "1e2279782717"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-574576 -n no-preload-574576
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-574576 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-574576 logs -n 25: (1.253108462s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p no-preload-574576 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:50 UTC │ 04 Sep 25 06:51 UTC │
	│ addons  │ enable dashboard -p no-preload-574576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-574576            │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p cert-expiration-620042 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ delete  │ -p cert-expiration-620042                                                                                                                                                                                                                     │ cert-expiration-620042       │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │                     │
	│ start   │ -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p kubernetes-upgrade-892549                                                                                                                                                                                                                  │ kubernetes-upgrade-892549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ delete  │ -p disable-driver-mounts-393542                                                                                                                                                                                                               │ disable-driver-mounts-393542 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p embed-certs-589812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p embed-certs-589812 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-520775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ stop    │ -p default-k8s-diff-port-520775 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-589812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-589812           │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-520775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:53 UTC │
	│ start   │ -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-520775 │ jenkins │ v1.36.0 │ 04 Sep 25 06:53 UTC │ 04 Sep 25 06:54 UTC │
	│ image   │ old-k8s-version-869290 image list --format=json                                                                                                                                                                                               │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 07:09 UTC │ 04 Sep 25 07:09 UTC │
	│ pause   │ -p old-k8s-version-869290 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 07:09 UTC │ 04 Sep 25 07:09 UTC │
	│ unpause │ -p old-k8s-version-869290 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 07:09 UTC │ 04 Sep 25 07:09 UTC │
	│ delete  │ -p old-k8s-version-869290                                                                                                                                                                                                                     │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 07:09 UTC │ 04 Sep 25 07:09 UTC │
	│ delete  │ -p old-k8s-version-869290                                                                                                                                                                                                                     │ old-k8s-version-869290       │ jenkins │ v1.36.0 │ 04 Sep 25 07:09 UTC │ 04 Sep 25 07:09 UTC │
	│ start   │ -p newest-cni-179620 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-179620            │ jenkins │ v1.36.0 │ 04 Sep 25 07:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 07:09:46
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 07:09:46.328189 1810424 out.go:360] Setting OutFile to fd 1 ...
	I0904 07:09:46.328462 1810424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:09:46.328472 1810424 out.go:374] Setting ErrFile to fd 2...
	I0904 07:09:46.328478 1810424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:09:46.328706 1810424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 07:09:46.329333 1810424 out.go:368] Setting JSON to false
	I0904 07:09:46.330578 1810424 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":17536,"bootTime":1756952250,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 07:09:46.330641 1810424 start.go:140] virtualization: kvm guest
	I0904 07:09:46.332893 1810424 out.go:179] * [newest-cni-179620] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 07:09:46.334341 1810424 notify.go:220] Checking for updates...
	I0904 07:09:46.334354 1810424 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 07:09:46.335856 1810424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 07:09:46.337192 1810424 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 07:09:46.338535 1810424 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 07:09:46.339889 1810424 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 07:09:46.341190 1810424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 07:09:46.343120 1810424 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:09:46.343283 1810424 config.go:182] Loaded profile config "embed-certs-589812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:09:46.343433 1810424 config.go:182] Loaded profile config "no-preload-574576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:09:46.343611 1810424 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 07:09:46.366533 1810424 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 07:09:46.366614 1810424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 07:09:46.419659 1810424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 07:09:46.409058532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 07:09:46.419787 1810424 docker.go:318] overlay module found
	I0904 07:09:46.421637 1810424 out.go:179] * Using the docker driver based on user configuration
	I0904 07:09:46.422659 1810424 start.go:304] selected driver: docker
	I0904 07:09:46.422672 1810424 start.go:918] validating driver "docker" against <nil>
	I0904 07:09:46.422684 1810424 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 07:09:46.423663 1810424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 07:09:46.472438 1810424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 07:09:46.463104912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 07:09:46.472628 1810424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W0904 07:09:46.472658 1810424 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0904 07:09:46.472901 1810424 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0904 07:09:46.474990 1810424 out.go:179] * Using Docker driver with root privileges
	I0904 07:09:46.476441 1810424 cni.go:84] Creating CNI manager for ""
	I0904 07:09:46.476516 1810424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0904 07:09:46.476524 1810424 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0904 07:09:46.476591 1810424 start.go:348] cluster config:
	{Name:newest-cni-179620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-179620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:09:46.477884 1810424 out.go:179] * Starting "newest-cni-179620" primary control-plane node in "newest-cni-179620" cluster
	I0904 07:09:46.478978 1810424 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 07:09:46.480149 1810424 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 07:09:46.481308 1810424 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:09:46.481343 1810424 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 07:09:46.481344 1810424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 07:09:46.481353 1810424 cache.go:58] Caching tarball of preloaded images
	I0904 07:09:46.481456 1810424 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 07:09:46.481469 1810424 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 07:09:46.481556 1810424 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/newest-cni-179620/config.json ...
	I0904 07:09:46.481576 1810424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/newest-cni-179620/config.json: {Name:mk649537256c21aba8d65f27a9d3a682e0a502aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:09:46.502047 1810424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 07:09:46.502069 1810424 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 07:09:46.502089 1810424 cache.go:232] Successfully downloaded all kic artifacts
	I0904 07:09:46.502120 1810424 start.go:360] acquireMachinesLock for newest-cni-179620: {Name:mk1e027ef46bef9befa82f68ef87ec17f2567cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:09:46.502220 1810424 start.go:364] duration metric: took 79.902µs to acquireMachinesLock for "newest-cni-179620"
	I0904 07:09:46.502250 1810424 start.go:93] Provisioning new machine with config: &{Name:newest-cni-179620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-179620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 07:09:46.502333 1810424 start.go:125] createHost starting for "" (driver="docker")
	I0904 07:09:46.504325 1810424 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0904 07:09:46.504520 1810424 start.go:159] libmachine.API.Create for "newest-cni-179620" (driver="docker")
	I0904 07:09:46.504551 1810424 client.go:168] LocalClient.Create starting
	I0904 07:09:46.504665 1810424 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem
	I0904 07:09:46.504719 1810424 main.go:141] libmachine: Decoding PEM data...
	I0904 07:09:46.504741 1810424 main.go:141] libmachine: Parsing certificate...
	I0904 07:09:46.504823 1810424 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem
	I0904 07:09:46.504846 1810424 main.go:141] libmachine: Decoding PEM data...
	I0904 07:09:46.504854 1810424 main.go:141] libmachine: Parsing certificate...
	I0904 07:09:46.505168 1810424 cli_runner.go:164] Run: docker network inspect newest-cni-179620 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 07:09:46.522602 1810424 cli_runner.go:211] docker network inspect newest-cni-179620 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 07:09:46.522696 1810424 network_create.go:284] running [docker network inspect newest-cni-179620] to gather additional debugging logs...
	I0904 07:09:46.522716 1810424 cli_runner.go:164] Run: docker network inspect newest-cni-179620
	W0904 07:09:46.539778 1810424 cli_runner.go:211] docker network inspect newest-cni-179620 returned with exit code 1
	I0904 07:09:46.539831 1810424 network_create.go:287] error running [docker network inspect newest-cni-179620]: docker network inspect newest-cni-179620: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-179620 not found
	I0904 07:09:46.539852 1810424 network_create.go:289] output of [docker network inspect newest-cni-179620]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-179620 not found
	
	** /stderr **
	I0904 07:09:46.539938 1810424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 07:09:46.558540 1810424 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a5bc02d2a27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:b0:fb:06:b8:46} reservation:<nil>}
	I0904 07:09:46.559354 1810424 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7f4544d24f56 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c9:24:c9:76:17} reservation:<nil>}
	I0904 07:09:46.560179 1810424 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8d033df89e75 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:42:94:35:ac:d0:4e} reservation:<nil>}
	I0904 07:09:46.561068 1810424 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e80fd0}
	I0904 07:09:46.561099 1810424 network_create.go:124] attempt to create docker network newest-cni-179620 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0904 07:09:46.561158 1810424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-179620 newest-cni-179620
	I0904 07:09:46.615922 1810424 network_create.go:108] docker network newest-cni-179620 192.168.76.0/24 created
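	[editor's note] The lines above show minikube skipping the three /24 subnets already held by other profiles and creating a dedicated bridge network on the first free one, 192.168.76.0/24. As a quick sanity check (an illustration, not part of the test run), the resulting network can be read back with the docker CLI:
	    $ docker network inspect newest-cni-179620 \
	        --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	    192.168.76.0/24 via 192.168.76.1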
	I0904 07:09:46.615954 1810424 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-179620" container
	I0904 07:09:46.616041 1810424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 07:09:46.633544 1810424 cli_runner.go:164] Run: docker volume create newest-cni-179620 --label name.minikube.sigs.k8s.io=newest-cni-179620 --label created_by.minikube.sigs.k8s.io=true
	I0904 07:09:46.652575 1810424 oci.go:103] Successfully created a docker volume newest-cni-179620
	I0904 07:09:46.652650 1810424 cli_runner.go:164] Run: docker run --rm --name newest-cni-179620-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-179620 --entrypoint /usr/bin/test -v newest-cni-179620:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -d /var/lib
	I0904 07:09:47.111078 1810424 oci.go:107] Successfully prepared a docker volume newest-cni-179620
	I0904 07:09:47.111161 1810424 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:09:47.111192 1810424 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 07:09:47.111278 1810424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-179620:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 07:09:51.729341 1810424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-179620:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir: (4.618000126s)
	I0904 07:09:51.729383 1810424 kic.go:203] duration metric: took 4.618185975s to extract preloaded images to volume ...
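	[editor's note] Here the preloaded image tarball is unpacked straight into the node's /var volume by a throwaway tar container, so CRI-O comes up with the Kubernetes images already in its storage. A hypothetical one-off listing of the volume (assuming the standard containers/storage location used by CRI-O) would confirm what was extracted:
	    # illustration only; path and entrypoint are assumptions, not taken from the test run
	    $ docker run --rm --entrypoint /bin/ls \
	        -v newest-cni-179620:/var \
	        gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc \
	        /var/lib/containers/storage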
	W0904 07:09:51.729539 1810424 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 07:09:51.729640 1810424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 07:09:51.779927 1810424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-179620 --name newest-cni-179620 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-179620 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-179620 --network newest-cni-179620 --ip 192.168.76.2 --volume newest-cni-179620:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc
	I0904 07:09:52.056574 1810424 cli_runner.go:164] Run: docker container inspect newest-cni-179620 --format={{.State.Running}}
	I0904 07:09:52.077522 1810424 cli_runner.go:164] Run: docker container inspect newest-cni-179620 --format={{.State.Status}}
	I0904 07:09:52.097340 1810424 cli_runner.go:164] Run: docker exec newest-cni-179620 stat /var/lib/dpkg/alternatives/iptables
	I0904 07:09:52.140930 1810424 oci.go:144] the created container "newest-cni-179620" has a running status.
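	[editor's note] The single docker run above creates the node container with its control ports (22, 2376, 5000, 8443, 32443) published to ephemeral host ports on 127.0.0.1. As a sketch of where the "127.0.0.1 34284" seen in the SSH client lines further down comes from, the mapping for the container's SSH port can be read back with docker port:
	    $ docker port newest-cni-179620 22/tcp
	    127.0.0.1:34284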
	I0904 07:09:52.140967 1810424 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/newest-cni-179620/id_rsa...
	I0904 07:09:52.438757 1810424 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/newest-cni-179620/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 07:09:52.465208 1810424 cli_runner.go:164] Run: docker container inspect newest-cni-179620 --format={{.State.Status}}
	I0904 07:09:52.484758 1810424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 07:09:52.484794 1810424 kic_runner.go:114] Args: [docker exec --privileged newest-cni-179620 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 07:09:52.535034 1810424 cli_runner.go:164] Run: docker container inspect newest-cni-179620 --format={{.State.Status}}
	I0904 07:09:52.560528 1810424 machine.go:93] provisionDockerMachine start ...
	I0904 07:09:52.560635 1810424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-179620
	I0904 07:09:52.579753 1810424 main.go:141] libmachine: Using SSH client type: native
	I0904 07:09:52.580165 1810424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34284 <nil> <nil>}
	I0904 07:09:52.580183 1810424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 07:09:52.751617 1810424 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-179620
	
	I0904 07:09:52.751677 1810424 ubuntu.go:182] provisioning hostname "newest-cni-179620"
	I0904 07:09:52.751759 1810424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-179620
	I0904 07:09:52.771917 1810424 main.go:141] libmachine: Using SSH client type: native
	I0904 07:09:52.772156 1810424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34284 <nil> <nil>}
	I0904 07:09:52.772170 1810424 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-179620 && echo "newest-cni-179620" | sudo tee /etc/hostname
	I0904 07:09:52.908325 1810424 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-179620
	
	I0904 07:09:52.908414 1810424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-179620
	I0904 07:09:52.928503 1810424 main.go:141] libmachine: Using SSH client type: native
	I0904 07:09:52.928777 1810424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34284 <nil> <nil>}
	I0904 07:09:52.928800 1810424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-179620' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-179620/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-179620' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 07:09:53.052430 1810424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 07:09:53.052469 1810424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1516970/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1516970/.minikube}
	I0904 07:09:53.052512 1810424 ubuntu.go:190] setting up certificates
	I0904 07:09:53.052528 1810424 provision.go:84] configureAuth start
	I0904 07:09:53.052584 1810424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-179620
	I0904 07:09:53.070722 1810424 provision.go:143] copyHostCerts
	I0904 07:09:53.070789 1810424 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem, removing ...
	I0904 07:09:53.070798 1810424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem
	I0904 07:09:53.070864 1810424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem (1082 bytes)
	I0904 07:09:53.070954 1810424 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem, removing ...
	I0904 07:09:53.070976 1810424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem
	I0904 07:09:53.071001 1810424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem (1123 bytes)
	I0904 07:09:53.071057 1810424 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem, removing ...
	I0904 07:09:53.071065 1810424 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem
	I0904 07:09:53.071101 1810424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem (1675 bytes)
	I0904 07:09:53.071200 1810424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem org=jenkins.newest-cni-179620 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-179620]
	I0904 07:09:53.143327 1810424 provision.go:177] copyRemoteCerts
	I0904 07:09:53.143382 1810424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 07:09:53.143432 1810424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-179620
	I0904 07:09:53.162016 1810424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/newest-cni-179620/id_rsa Username:docker}
	I0904 07:09:53.257124 1810424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 07:09:53.281349 1810424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0904 07:09:53.305597 1810424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 07:09:53.329526 1810424 provision.go:87] duration metric: took 276.979767ms to configureAuth
	I0904 07:09:53.329559 1810424 ubuntu.go:206] setting minikube options for container-runtime
	I0904 07:09:53.329777 1810424 config.go:182] Loaded profile config "newest-cni-179620": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:09:53.329886 1810424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-179620
	I0904 07:09:53.348586 1810424 main.go:141] libmachine: Using SSH client type: native
	I0904 07:09:53.348797 1810424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34284 <nil> <nil>}
	I0904 07:09:53.348814 1810424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 07:09:53.565995 1810424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 07:09:53.566024 1810424 machine.go:96] duration metric: took 1.005473341s to provisionDockerMachine
	I0904 07:09:53.566036 1810424 client.go:171] duration metric: took 7.061475224s to LocalClient.Create
	I0904 07:09:53.566060 1810424 start.go:167] duration metric: took 7.061538524s to libmachine.API.Create "newest-cni-179620"
	I0904 07:09:53.566070 1810424 start.go:293] postStartSetup for "newest-cni-179620" (driver="docker")
	I0904 07:09:53.566084 1810424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 07:09:53.566153 1810424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 07:09:53.566218 1810424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-179620
	I0904 07:09:53.585067 1810424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/newest-cni-179620/id_rsa Username:docker}
	I0904 07:09:53.677768 1810424 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 07:09:53.681401 1810424 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 07:09:53.681445 1810424 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 07:09:53.681465 1810424 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 07:09:53.681474 1810424 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 07:09:53.681485 1810424 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/addons for local assets ...
	I0904 07:09:53.681540 1810424 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/files for local assets ...
	I0904 07:09:53.681628 1810424 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem -> 15207162.pem in /etc/ssl/certs
	I0904 07:09:53.681751 1810424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 07:09:53.690373 1810424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 07:09:53.713962 1810424 start.go:296] duration metric: took 147.872602ms for postStartSetup
	I0904 07:09:53.714424 1810424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-179620
	I0904 07:09:53.732329 1810424 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/newest-cni-179620/config.json ...
	I0904 07:09:53.732650 1810424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 07:09:53.732703 1810424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-179620
	I0904 07:09:53.751468 1810424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/newest-cni-179620/id_rsa Username:docker}
	I0904 07:09:53.836973 1810424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 07:09:53.841743 1810424 start.go:128] duration metric: took 7.339391301s to createHost
	I0904 07:09:53.841779 1810424 start.go:83] releasing machines lock for "newest-cni-179620", held for 7.339537336s
	I0904 07:09:53.841850 1810424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-179620
	I0904 07:09:53.859859 1810424 ssh_runner.go:195] Run: cat /version.json
	I0904 07:09:53.859916 1810424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-179620
	I0904 07:09:53.859914 1810424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 07:09:53.860001 1810424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-179620
	I0904 07:09:53.878795 1810424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/newest-cni-179620/id_rsa Username:docker}
	I0904 07:09:53.878784 1810424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34284 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/newest-cni-179620/id_rsa Username:docker}
	I0904 07:09:54.036803 1810424 ssh_runner.go:195] Run: systemctl --version
	I0904 07:09:54.041100 1810424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 07:09:54.181153 1810424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 07:09:54.185734 1810424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 07:09:54.204909 1810424 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 07:09:54.205000 1810424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 07:09:54.235758 1810424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0904 07:09:54.235790 1810424 start.go:495] detecting cgroup driver to use...
	I0904 07:09:54.235872 1810424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 07:09:54.235948 1810424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 07:09:54.252666 1810424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 07:09:54.264114 1810424 docker.go:218] disabling cri-docker service (if available) ...
	I0904 07:09:54.264178 1810424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 07:09:54.277526 1810424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 07:09:54.291316 1810424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 07:09:54.370345 1810424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 07:09:54.453939 1810424 docker.go:234] disabling docker service ...
	I0904 07:09:54.454027 1810424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 07:09:54.474647 1810424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 07:09:54.486505 1810424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 07:09:54.571399 1810424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 07:09:54.665173 1810424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 07:09:54.677543 1810424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 07:09:54.694164 1810424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 07:09:54.694232 1810424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:09:54.703637 1810424 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 07:09:54.703765 1810424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:09:54.713412 1810424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:09:54.722609 1810424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:09:54.732383 1810424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 07:09:54.741831 1810424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:09:54.751647 1810424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:09:54.769687 1810424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:09:54.782584 1810424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 07:09:54.792657 1810424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 07:09:54.803095 1810424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:09:54.883717 1810424 ssh_runner.go:195] Run: sudo systemctl restart crio
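	[editor's note] The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf before the restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "cgroupfs", conmon_cgroup to "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. A minimal way to confirm the edits landed (a sketch, assuming the drop-in keeps its default layout) is:
	    $ sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # expected, roughly:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",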
	I0904 07:09:55.000934 1810424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 07:09:55.001008 1810424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 07:09:55.005450 1810424 start.go:563] Will wait 60s for crictl version
	I0904 07:09:55.005503 1810424 ssh_runner.go:195] Run: which crictl
	I0904 07:09:55.008932 1810424 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 07:09:55.048694 1810424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 07:09:55.048775 1810424 ssh_runner.go:195] Run: crio --version
	I0904 07:09:55.091240 1810424 ssh_runner.go:195] Run: crio --version
	I0904 07:09:55.134553 1810424 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 07:09:55.135983 1810424 cli_runner.go:164] Run: docker network inspect newest-cni-179620 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 07:09:55.157299 1810424 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0904 07:09:55.161191 1810424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 07:09:55.176347 1810424 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Sep 04 07:08:33 no-preload-574576 crio[666]: time="2025-09-04 07:08:33.236385582Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0f84088d-4d7d-4816-aaa2-e4266ef65665 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:46 no-preload-574576 crio[666]: time="2025-09-04 07:08:46.236338845Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1a0ddac2-a85c-4b24-b253-635cedc8ad3b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:46 no-preload-574576 crio[666]: time="2025-09-04 07:08:46.236670291Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1a0ddac2-a85c-4b24-b253-635cedc8ad3b name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:48 no-preload-574576 crio[666]: time="2025-09-04 07:08:48.236260700Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=dd9402ed-b9ba-4df2-9e09-9466f0b54f70 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:48 no-preload-574576 crio[666]: time="2025-09-04 07:08:48.236549725Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=dd9402ed-b9ba-4df2-9e09-9466f0b54f70 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:58 no-preload-574576 crio[666]: time="2025-09-04 07:08:58.235994225Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=be195815-79fd-41a5-b061-210cebd04a77 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:08:58 no-preload-574576 crio[666]: time="2025-09-04 07:08:58.236363188Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=be195815-79fd-41a5-b061-210cebd04a77 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:00 no-preload-574576 crio[666]: time="2025-09-04 07:09:00.236957596Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b37edd08-c8ec-4158-a368-702581f7f75f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:00 no-preload-574576 crio[666]: time="2025-09-04 07:09:00.237161381Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b37edd08-c8ec-4158-a368-702581f7f75f name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:11 no-preload-574576 crio[666]: time="2025-09-04 07:09:11.236460389Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=591d828f-49d4-4f4d-9b87-8a501efd8019 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:11 no-preload-574576 crio[666]: time="2025-09-04 07:09:11.236497627Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8cd063fc-6046-49a0-b185-acee3d128930 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:11 no-preload-574576 crio[666]: time="2025-09-04 07:09:11.236723247Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=591d828f-49d4-4f4d-9b87-8a501efd8019 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:11 no-preload-574576 crio[666]: time="2025-09-04 07:09:11.236797924Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8cd063fc-6046-49a0-b185-acee3d128930 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:22 no-preload-574576 crio[666]: time="2025-09-04 07:09:22.235859295Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=81d03b4f-d1d0-4fa8-a8f4-b377902f3352 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:22 no-preload-574576 crio[666]: time="2025-09-04 07:09:22.236180707Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=81d03b4f-d1d0-4fa8-a8f4-b377902f3352 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:25 no-preload-574576 crio[666]: time="2025-09-04 07:09:25.236649675Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=bea467e0-10ed-4286-bd6b-5d6971d49178 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:25 no-preload-574576 crio[666]: time="2025-09-04 07:09:25.236948343Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=bea467e0-10ed-4286-bd6b-5d6971d49178 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:36 no-preload-574576 crio[666]: time="2025-09-04 07:09:36.236665680Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=04bf5172-1c28-4d7d-8c22-fed6bbba853d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:36 no-preload-574576 crio[666]: time="2025-09-04 07:09:36.236968494Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=04bf5172-1c28-4d7d-8c22-fed6bbba853d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:40 no-preload-574576 crio[666]: time="2025-09-04 07:09:40.236235677Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9d84b37a-4fd8-4bb3-93db-e324262a09b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:40 no-preload-574576 crio[666]: time="2025-09-04 07:09:40.236580329Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9d84b37a-4fd8-4bb3-93db-e324262a09b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:51 no-preload-574576 crio[666]: time="2025-09-04 07:09:51.236419343Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a0dde062-8987-475e-a87e-5632523ad0cd name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:51 no-preload-574576 crio[666]: time="2025-09-04 07:09:51.236689192Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a0dde062-8987-475e-a87e-5632523ad0cd name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:52 no-preload-574576 crio[666]: time="2025-09-04 07:09:52.236189901Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=453b75e1-c33a-48fb-ad27-8c1f78951dc4 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:09:52 no-preload-574576 crio[666]: time="2025-09-04 07:09:52.243890815Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=453b75e1-c33a-48fb-ad27-8c1f78951dc4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cb3e2db786946       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   99b9a9cdc190d       dashboard-metrics-scraper-6ffb444bf9-wm46d
	c8136e0896839       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   d0ee403e6035f       storage-provisioner
	a21465d8b7fdd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   408c41dd6d4e9       coredns-66bc5c9577-g4ljx
	6bca85b30355c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   8f97a902d3bce       busybox
	350de3861b1dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   b36df929f7b38       kindnet-w6frr
	739d378171e97       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago      Running             kube-proxy                  1                   2f70166fd9f50       kube-proxy-9mbq6
	55592e1198d59       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   d0ee403e6035f       storage-provisioner
	0a2bb5e07e675       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   4f56ae4464038       etcd-no-preload-574576
	bca0ae139442e       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago      Running             kube-apiserver              1                   e64edcea25c8c       kube-apiserver-no-preload-574576
	6781db6486f53       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago      Running             kube-scheduler              1                   aa66d60fa2806       kube-scheduler-no-preload-574576
	b8bcf79ea0251       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago      Running             kube-controller-manager     1                   11502392a0613       kube-controller-manager-no-preload-574576
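	[editor's note] The table above is the runtime's own view of the pods (crictl against the CRI-O socket on no-preload-574576). The one container in Exited state is dashboard-metrics-scraper on its 8th attempt; its last output could be pulled straight from the runtime, for example:
	    $ sudo crictl ps -a
	    $ sudo crictl logs cb3e2db786946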
	
	
	==> coredns [a21465d8b7fddb1579125e0031a25e9e42476eb09b47d3c11f86cb5f968a86a6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37281 - 7213 "HINFO IN 7968160076350310455.3401697822833025019. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044332656s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
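	[editor's note] The repeated i/o timeouts show CoreDNS unable to reach the API server through the kubernetes Service VIP (10.96.0.1:443) for a stretch after its restart. A minimal in-cluster reachability probe, assuming a pullable curl image and the profile name as the kubectl context, would look like:
	    $ kubectl --context no-preload-574576 run vip-check --rm -i --restart=Never \
	        --image=curlimages/curl -- curl -ksS -m 3 https://10.96.0.1:443/version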
	
	
	==> describe nodes <==
	Name:               no-preload-574576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-574576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=no-preload-574576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_50_20_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:50:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-574576
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 07:09:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 07:08:55 +0000   Thu, 04 Sep 2025 06:50:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 07:08:55 +0000   Thu, 04 Sep 2025 06:50:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 07:08:55 +0000   Thu, 04 Sep 2025 06:50:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 07:08:55 +0000   Thu, 04 Sep 2025 06:50:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-574576
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 d5052de359b54ec3a3ddba9267f3f8f8
	  System UUID:                008625d3-fb91-460f-8e35-73af0d41b639
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-g4ljx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-no-preload-574576                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-w6frr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-no-preload-574576              250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-574576     200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-9mbq6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-574576              100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-7qmkr               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wm46d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rf2hg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientPID     19m                kubelet          Node no-preload-574576 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node no-preload-574576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node no-preload-574576 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           19m                node-controller  Node no-preload-574576 event: Registered Node no-preload-574576 in Controller
	  Normal   NodeReady                19m                kubelet          Node no-preload-574576 status is now: NodeReady
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-574576 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-574576 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node no-preload-574576 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node no-preload-574576 event: Registered Node no-preload-574576 in Controller
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +2.011770] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000003] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +1.535866] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000001] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.003918] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +2.555764] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000006] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000004] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +8.191102] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000008] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000002] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	
	
	==> etcd [0a2bb5e07e675a06d7d5365f4ea46671cdd16bdeeefe39c7a4a4d25750de1c68] <==
	{"level":"warn","ts":"2025-09-04T06:51:13.406438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.413499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.419824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.425947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.432855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.439381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.446504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.476087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.479424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.503390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.509547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:51:13.556727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58348","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T06:52:04.359170Z","caller":"traceutil/trace.go:172","msg":"trace[719735697] transaction","detail":"{read_only:false; response_revision:675; number_of_response:1; }","duration":"119.061808ms","start":"2025-09-04T06:52:04.240087Z","end":"2025-09-04T06:52:04.359149Z","steps":["trace[719735697] 'process raft request'  (duration: 118.955483ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:52:05.626053Z","caller":"traceutil/trace.go:172","msg":"trace[918571184] transaction","detail":"{read_only:false; response_revision:680; number_of_response:1; }","duration":"166.645055ms","start":"2025-09-04T06:52:05.459388Z","end":"2025-09-04T06:52:05.626033Z","steps":["trace[918571184] 'process raft request'  (duration: 83.482308ms)","trace[918571184] 'compare'  (duration: 83.040109ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T06:52:48.384673Z","caller":"traceutil/trace.go:172","msg":"trace[836855457] linearizableReadLoop","detail":"{readStateIndex:782; appliedIndex:782; }","duration":"132.164763ms","start":"2025-09-04T06:52:48.252486Z","end":"2025-09-04T06:52:48.384651Z","steps":["trace[836855457] 'read index received'  (duration: 132.156788ms)","trace[836855457] 'applied index is now lower than readState.Index'  (duration: 6.756µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T06:52:48.384855Z","caller":"traceutil/trace.go:172","msg":"trace[1954956525] transaction","detail":"{read_only:false; response_revision:733; number_of_response:1; }","duration":"141.725704ms","start":"2025-09-04T06:52:48.243112Z","end":"2025-09-04T06:52:48.384838Z","steps":["trace[1954956525] 'process raft request'  (duration: 141.569723ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T06:52:48.384899Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.362913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg.186201bca8ca4cab\" limit:1 ","response":"range_response_count:1 size:947"}
	{"level":"info","ts":"2025-09-04T06:52:48.384989Z","caller":"traceutil/trace.go:172","msg":"trace[1911627484] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg.186201bca8ca4cab; range_end:; response_count:1; response_revision:732; }","duration":"132.502342ms","start":"2025-09-04T06:52:48.252475Z","end":"2025-09-04T06:52:48.384977Z","steps":["trace[1911627484] 'agreement among raft nodes before linearized reading'  (duration: 132.267398ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T06:52:50.274391Z","caller":"traceutil/trace.go:172","msg":"trace[1344793255] transaction","detail":"{read_only:false; response_revision:740; number_of_response:1; }","duration":"124.275044ms","start":"2025-09-04T06:52:50.150089Z","end":"2025-09-04T06:52:50.274364Z","steps":["trace[1344793255] 'process raft request'  (duration: 124.068613ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T07:01:12.718736Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":987}
	{"level":"info","ts":"2025-09-04T07:01:12.725295Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":987,"took":"6.249406ms","hash":4270930048,"current-db-size-bytes":3145728,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":3145728,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2025-09-04T07:01:12.725340Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4270930048,"revision":987,"compact-revision":-1}
	{"level":"info","ts":"2025-09-04T07:06:12.724449Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1267}
	{"level":"info","ts":"2025-09-04T07:06:12.727641Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1267,"took":"2.752917ms","hash":1250227502,"current-db-size-bytes":3145728,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1814528,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-04T07:06:12.727683Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1250227502,"revision":1267,"compact-revision":987}
	
	
	==> kernel <==
	 07:09:56 up  4:52,  0 users,  load average: 0.83, 0.67, 1.15
	Linux no-preload-574576 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [350de3861b1dc56b7e34601fd88f6d1ab9a8f3908d667be044393fda23dca64a] <==
	I0904 07:07:46.502994       1 main.go:301] handling current node
	I0904 07:07:56.509823       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:07:56.509867       1 main.go:301] handling current node
	I0904 07:08:06.503080       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:08:06.503115       1 main.go:301] handling current node
	I0904 07:08:16.503683       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:08:16.503735       1 main.go:301] handling current node
	I0904 07:08:26.504740       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:08:26.504775       1 main.go:301] handling current node
	I0904 07:08:36.507901       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:08:36.507938       1 main.go:301] handling current node
	I0904 07:08:46.503346       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:08:46.503388       1 main.go:301] handling current node
	I0904 07:08:56.503457       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:08:56.503495       1 main.go:301] handling current node
	I0904 07:09:06.512149       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:09:06.512183       1 main.go:301] handling current node
	I0904 07:09:16.511898       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:09:16.511934       1 main.go:301] handling current node
	I0904 07:09:26.503023       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:09:26.503059       1 main.go:301] handling current node
	I0904 07:09:36.511871       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:09:36.511908       1 main.go:301] handling current node
	I0904 07:09:46.511922       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0904 07:09:46.511968       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bca0ae139442e7d50d3cddbc0fc77c7d71f27421ae41c30357e7538da5f054bf] <==
	I0904 07:06:15.237479       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:06:40.304951       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:07:02.750431       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 07:07:15.236532       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:07:15.236587       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 07:07:15.236605       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:07:15.237683       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:07:15.237745       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 07:07:15.237757       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:07:56.002117       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:08:28.454331       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:09:09.518452       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 07:09:15.237381       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:09:15.237428       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 07:09:15.237443       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:09:15.238546       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:09:15.238656       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 07:09:15.238675       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:09:33.411968       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [b8bcf79ea02511930b5221e35df8b6b4b686e5b9f11a570db679378717f0b0a3] <==
	I0904 07:03:49.755675       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:04:19.680101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:04:19.763157       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:04:49.684457       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:04:49.769673       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:05:19.688971       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:05:19.777174       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:05:49.692822       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:05:49.784412       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:06:19.697342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:06:19.791537       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:06:49.701880       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:06:49.797843       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:07:19.706947       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:07:19.804600       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:07:49.711538       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:07:49.811734       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:08:19.716373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:08:19.818480       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:08:49.720117       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:08:49.824965       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:09:19.725239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:09:19.832461       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:09:49.729493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:09:49.839576       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [739d378171e97dd327b3f332900b3b60caca10a991c55f3e28c636ae1afab805] <==
	I0904 06:51:16.144228       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:51:16.317230       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:51:16.417386       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:51:16.417464       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0904 06:51:16.417579       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:51:16.437820       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:51:16.437885       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:51:16.441933       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:51:16.442308       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:51:16.442350       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:51:16.445117       1 config.go:200] "Starting service config controller"
	I0904 06:51:16.445133       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:51:16.445143       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:51:16.445148       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:51:16.445134       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:51:16.445173       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:51:16.445245       1 config.go:309] "Starting node config controller"
	I0904 06:51:16.445283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:51:16.445315       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:51:16.545975       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 06:51:16.546010       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:51:16.545995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6781db6486f532f600b6522565654ef4da9df25769e534e5680c3d8ca37fa996] <==
	I0904 06:51:12.725113       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:51:14.216143       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:51:14.216244       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:51:14.216258       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:51:14.216268       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:51:14.418226       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 06:51:14.418267       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:51:14.500611       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:51:14.500662       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:51:14.501793       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:51:14.501953       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 06:51:14.601187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 07:09:10 no-preload-574576 kubelet[802]: E0904 07:09:10.391788     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969750391440234  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:09:11 no-preload-574576 kubelet[802]: E0904 07:09:11.237056     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7qmkr" podUID="14f3f7b5-1a03-4bc5-b95b-0a35a2a86978"
	Sep 04 07:09:11 no-preload-574576 kubelet[802]: E0904 07:09:11.237067     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg" podUID="0a81ba81-116f-4a44-ab32-2b3c88744009"
	Sep 04 07:09:16 no-preload-574576 kubelet[802]: I0904 07:09:16.235344     802 scope.go:117] "RemoveContainer" containerID="cb3e2db786946353e9d2bb75e50c58cacd83a2cdb9af0973d12c36f118056b4a"
	Sep 04 07:09:16 no-preload-574576 kubelet[802]: E0904 07:09:16.235574     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wm46d_kubernetes-dashboard(399b73af-1776-4973-905e-d26f180167cb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wm46d" podUID="399b73af-1776-4973-905e-d26f180167cb"
	Sep 04 07:09:20 no-preload-574576 kubelet[802]: E0904 07:09:20.392999     802 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969760392793752  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:09:20 no-preload-574576 kubelet[802]: E0904 07:09:20.393043     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969760392793752  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:09:22 no-preload-574576 kubelet[802]: E0904 07:09:22.236487     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7qmkr" podUID="14f3f7b5-1a03-4bc5-b95b-0a35a2a86978"
	Sep 04 07:09:25 no-preload-574576 kubelet[802]: E0904 07:09:25.237362     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg" podUID="0a81ba81-116f-4a44-ab32-2b3c88744009"
	Sep 04 07:09:27 no-preload-574576 kubelet[802]: I0904 07:09:27.235625     802 scope.go:117] "RemoveContainer" containerID="cb3e2db786946353e9d2bb75e50c58cacd83a2cdb9af0973d12c36f118056b4a"
	Sep 04 07:09:27 no-preload-574576 kubelet[802]: E0904 07:09:27.235924     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wm46d_kubernetes-dashboard(399b73af-1776-4973-905e-d26f180167cb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wm46d" podUID="399b73af-1776-4973-905e-d26f180167cb"
	Sep 04 07:09:30 no-preload-574576 kubelet[802]: E0904 07:09:30.394179     802 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969770393947562  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:09:30 no-preload-574576 kubelet[802]: E0904 07:09:30.394227     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969770393947562  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:09:36 no-preload-574576 kubelet[802]: E0904 07:09:36.237259     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7qmkr" podUID="14f3f7b5-1a03-4bc5-b95b-0a35a2a86978"
	Sep 04 07:09:40 no-preload-574576 kubelet[802]: E0904 07:09:40.237010     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg" podUID="0a81ba81-116f-4a44-ab32-2b3c88744009"
	Sep 04 07:09:40 no-preload-574576 kubelet[802]: E0904 07:09:40.395525     802 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969780395262335  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:09:40 no-preload-574576 kubelet[802]: E0904 07:09:40.395567     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969780395262335  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:09:41 no-preload-574576 kubelet[802]: I0904 07:09:41.236055     802 scope.go:117] "RemoveContainer" containerID="cb3e2db786946353e9d2bb75e50c58cacd83a2cdb9af0973d12c36f118056b4a"
	Sep 04 07:09:41 no-preload-574576 kubelet[802]: E0904 07:09:41.236243     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wm46d_kubernetes-dashboard(399b73af-1776-4973-905e-d26f180167cb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wm46d" podUID="399b73af-1776-4973-905e-d26f180167cb"
	Sep 04 07:09:50 no-preload-574576 kubelet[802]: E0904 07:09:50.397474     802 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969790397226448  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:09:50 no-preload-574576 kubelet[802]: E0904 07:09:50.397516     802 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969790397226448  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:154065}  inodes_used:{value:59}}"
	Sep 04 07:09:51 no-preload-574576 kubelet[802]: E0904 07:09:51.237040     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7qmkr" podUID="14f3f7b5-1a03-4bc5-b95b-0a35a2a86978"
	Sep 04 07:09:52 no-preload-574576 kubelet[802]: E0904 07:09:52.244336     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rf2hg" podUID="0a81ba81-116f-4a44-ab32-2b3c88744009"
	Sep 04 07:09:55 no-preload-574576 kubelet[802]: I0904 07:09:55.236270     802 scope.go:117] "RemoveContainer" containerID="cb3e2db786946353e9d2bb75e50c58cacd83a2cdb9af0973d12c36f118056b4a"
	Sep 04 07:09:55 no-preload-574576 kubelet[802]: E0904 07:09:55.236436     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wm46d_kubernetes-dashboard(399b73af-1776-4973-905e-d26f180167cb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wm46d" podUID="399b73af-1776-4973-905e-d26f180167cb"
	
	
	==> storage-provisioner [55592e1198d594770403fcc20e6174ff3e1f124050a8d46f6a49c878245932fe] <==
	I0904 06:51:16.018726       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 06:51:46.021639       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c8136e0896839ed9725ac3a18e4cdc34fca2f12d8852b78fa7c810b6e5e09950] <==
	W0904 07:09:32.119342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:34.122429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:34.126507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:36.129104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:36.134438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:38.137483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:38.141714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:40.144572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:40.150113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:42.154184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:42.158278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:44.161653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:44.165465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:46.169100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:46.173468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:48.177465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:48.183780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:50.187583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:50.204796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:52.209281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:52.213659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:54.216727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:54.222858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:56.227577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:09:56.231785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-574576 -n no-preload-574576
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-574576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-7qmkr kubernetes-dashboard-855c9754f9-rf2hg
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-574576 describe pod metrics-server-746fcd58dc-7qmkr kubernetes-dashboard-855c9754f9-rf2hg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-574576 describe pod metrics-server-746fcd58dc-7qmkr kubernetes-dashboard-855c9754f9-rf2hg: exit status 1 (67.86957ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-7qmkr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-rf2hg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-574576 describe pod metrics-server-746fcd58dc-7qmkr kubernetes-dashboard-855c9754f9-rf2hg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (544.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wlwcq" [ddf273f4-7295-4b47-a1af-b2f7c30d2f94] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-589812 -n embed-certs-589812
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-04 07:12:29.281270884 +0000 UTC m=+4340.070301512
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-589812 describe po kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-589812 describe po kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-wlwcq
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-589812/192.168.94.2
Start Time:       Thu, 04 Sep 2025 06:53:55 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-trx94 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-trx94:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wlwcq to embed-certs-589812
  Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     12m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     12m (x5 over 18m)     kubelet            Error: ErrImagePull
  Normal   BackOff    3m30s (x48 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     2m50s (x51 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-589812 logs kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-589812 logs kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard: exit status 1 (75.690826ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-wlwcq" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-589812 logs kubernetes-dashboard-855c9754f9-wlwcq -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-589812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-589812
helpers_test.go:243: (dbg) docker inspect embed-certs-589812:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e",
	        "Created": "2025-09-04T06:52:05.721813416Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1795063,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:53:38.983357181Z",
	            "FinishedAt": "2025-09-04T06:53:38.236293542Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e/hosts",
	        "LogPath": "/var/lib/docker/containers/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e/0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e-json.log",
	        "Name": "/embed-certs-589812",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-589812:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-589812",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0161e12dd5cfc739425ef831f64b45a895667d74adbd7507e445b97684247a2e",
	                "LowerDir": "/var/lib/docker/overlay2/29b9979564cb53163c731acd557f9ccddda8f5bb35afe526647e9462d37422d8-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29b9979564cb53163c731acd557f9ccddda8f5bb35afe526647e9462d37422d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29b9979564cb53163c731acd557f9ccddda8f5bb35afe526647e9462d37422d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29b9979564cb53163c731acd557f9ccddda8f5bb35afe526647e9462d37422d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-589812",
	                "Source": "/var/lib/docker/volumes/embed-certs-589812/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-589812",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-589812",
	                "name.minikube.sigs.k8s.io": "embed-certs-589812",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a9c0fa1d4d1c4c114abf8ac3fc5d11d53182a2b8f5b8047ce9e4181a59fe1c1",
	            "SandboxKey": "/var/run/docker/netns/9a9c0fa1d4d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34274"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34275"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34278"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34276"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34277"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-589812": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:b7:ff:9e:ed:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "806214837f28a2edc5791a33bea586453455fab44fad177c8aac833d4001dfed",
	                    "EndpointID": "b4cb6b560accbbaebb5aa4fc48ecc4d80bfd0c24aef0e0f38e6f38c4dc5a258f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-589812",
	                        "0161e12dd5cf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-589812 -n embed-certs-589812
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-589812 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-589812 logs -n 25: (2.885478884s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-444288 sudo systemctl status kubelet --all --full --no-pager                                                                                            │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo systemctl cat kubelet --no-pager                                                                                                            │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                             │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /etc/kubernetes/kubelet.conf                                                                                                            │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /var/lib/kubelet/config.yaml                                                                                                            │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo systemctl status docker --all --full --no-pager                                                                                             │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo docker system info                                                                                                                          │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo systemctl cat cri-docker --no-pager                                                                                                         │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                    │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                              │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cri-dockerd --version                                                                                                                       │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo systemctl status containerd --all --full --no-pager                                                                                         │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo systemctl cat containerd --no-pager                                                                                                         │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /lib/systemd/system/containerd.service                                                                                                  │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /etc/containerd/config.toml                                                                                                             │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo containerd config dump                                                                                                                      │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo systemctl status crio --all --full --no-pager                                                                                               │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo systemctl cat crio --no-pager                                                                                                               │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                     │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo crio config                                                                                                                                 │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ delete  │ -p kindnet-444288                                                                                                                                                  │ kindnet-444288        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ start   │ -p custom-flannel-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-444288 │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 07:12:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 07:12:26.934862 1838515 out.go:360] Setting OutFile to fd 1 ...
	I0904 07:12:26.935178 1838515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:12:26.935190 1838515 out.go:374] Setting ErrFile to fd 2...
	I0904 07:12:26.935193 1838515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:12:26.935384 1838515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 07:12:26.935973 1838515 out.go:368] Setting JSON to false
	I0904 07:12:26.937195 1838515 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":17697,"bootTime":1756952250,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 07:12:26.937250 1838515 start.go:140] virtualization: kvm guest
	I0904 07:12:26.939307 1838515 out.go:179] * [custom-flannel-444288] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 07:12:26.940666 1838515 notify.go:220] Checking for updates...
	I0904 07:12:26.940699 1838515 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 07:12:26.942144 1838515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 07:12:26.943528 1838515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 07:12:26.944998 1838515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 07:12:26.946377 1838515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 07:12:26.947741 1838515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 07:12:26.949437 1838515 config.go:182] Loaded profile config "calico-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:12:26.949557 1838515 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:12:26.949656 1838515 config.go:182] Loaded profile config "embed-certs-589812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:12:26.949776 1838515 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 07:12:26.974982 1838515 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 07:12:26.975115 1838515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 07:12:27.028991 1838515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 07:12:27.018378636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 07:12:27.029129 1838515 docker.go:318] overlay module found
	I0904 07:12:27.031897 1838515 out.go:179] * Using the docker driver based on user configuration
	I0904 07:12:27.033111 1838515 start.go:304] selected driver: docker
	I0904 07:12:27.033132 1838515 start.go:918] validating driver "docker" against <nil>
	I0904 07:12:27.033155 1838515 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 07:12:27.034124 1838515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 07:12:27.081395 1838515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 07:12:27.072685355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 07:12:27.081582 1838515 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 07:12:27.081783 1838515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 07:12:27.083632 1838515 out.go:179] * Using Docker driver with root privileges
	I0904 07:12:27.084890 1838515 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0904 07:12:27.084917 1838515 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0904 07:12:27.084980 1838515 start.go:348] cluster config:
	{Name:custom-flannel-444288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:custom-flannel-444288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:12:27.086416 1838515 out.go:179] * Starting "custom-flannel-444288" primary control-plane node in "custom-flannel-444288" cluster
	I0904 07:12:27.087693 1838515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 07:12:27.088886 1838515 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 07:12:27.090201 1838515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:12:27.090240 1838515 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 07:12:27.090252 1838515 cache.go:58] Caching tarball of preloaded images
	I0904 07:12:27.090314 1838515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 07:12:27.090361 1838515 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 07:12:27.090377 1838515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 07:12:27.090532 1838515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/custom-flannel-444288/config.json ...
	I0904 07:12:27.090566 1838515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/custom-flannel-444288/config.json: {Name:mke4f98901c1e71e5b51ea27af4da3cdac728d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:12:27.112123 1838515 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 07:12:27.112146 1838515 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 07:12:27.112162 1838515 cache.go:232] Successfully downloaded all kic artifacts
	I0904 07:12:27.112197 1838515 start.go:360] acquireMachinesLock for custom-flannel-444288: {Name:mk522e5186b88218b708eeb3cc4a5460269527ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:12:27.112307 1838515 start.go:364] duration metric: took 90.862µs to acquireMachinesLock for "custom-flannel-444288"
	I0904 07:12:27.112334 1838515 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-444288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:custom-flannel-444288 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 07:12:27.112420 1838515 start.go:125] createHost starting for "" (driver="docker")
	W0904 07:12:26.506185 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:28.506312 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Sep 04 07:11:09 embed-certs-589812 crio[662]: time="2025-09-04 07:11:09.429992035Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=1297ad9d-cc00-41bc-922d-a0b13b32c900 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:16 embed-certs-589812 crio[662]: time="2025-09-04 07:11:16.429025676Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e1f3f838-1327-4492-b65a-b3b696b36cb6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:16 embed-certs-589812 crio[662]: time="2025-09-04 07:11:16.429258450Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e1f3f838-1327-4492-b65a-b3b696b36cb6 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:22 embed-certs-589812 crio[662]: time="2025-09-04 07:11:22.429163925Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3acbbe71-b895-4341-94c5-1948297db218 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:22 embed-certs-589812 crio[662]: time="2025-09-04 07:11:22.429413843Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3acbbe71-b895-4341-94c5-1948297db218 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:31 embed-certs-589812 crio[662]: time="2025-09-04 07:11:31.429240004Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=45b64670-2f59-4d90-bf27-7cf9597785f9 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:31 embed-certs-589812 crio[662]: time="2025-09-04 07:11:31.429483049Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=45b64670-2f59-4d90-bf27-7cf9597785f9 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:37 embed-certs-589812 crio[662]: time="2025-09-04 07:11:37.429105372Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e24a7349-e56d-437e-9b80-18b651009af2 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:37 embed-certs-589812 crio[662]: time="2025-09-04 07:11:37.429448455Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e24a7349-e56d-437e-9b80-18b651009af2 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:43 embed-certs-589812 crio[662]: time="2025-09-04 07:11:43.429474923Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=827f8278-5406-45cf-ab2c-3499d65fd5d2 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:43 embed-certs-589812 crio[662]: time="2025-09-04 07:11:43.429776281Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=827f8278-5406-45cf-ab2c-3499d65fd5d2 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:50 embed-certs-589812 crio[662]: time="2025-09-04 07:11:50.429865023Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c35ce9bf-4f9b-431c-b330-2096dff8349c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:50 embed-certs-589812 crio[662]: time="2025-09-04 07:11:50.430133156Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c35ce9bf-4f9b-431c-b330-2096dff8349c name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:58 embed-certs-589812 crio[662]: time="2025-09-04 07:11:58.432610785Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8b8a642f-ff97-445d-bc78-19ca4fdad622 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:58 embed-certs-589812 crio[662]: time="2025-09-04 07:11:58.432918283Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8b8a642f-ff97-445d-bc78-19ca4fdad622 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:04 embed-certs-589812 crio[662]: time="2025-09-04 07:12:04.429399394Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e8418449-ceb4-41d9-850b-824cfa163217 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:04 embed-certs-589812 crio[662]: time="2025-09-04 07:12:04.429653197Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e8418449-ceb4-41d9-850b-824cfa163217 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:11 embed-certs-589812 crio[662]: time="2025-09-04 07:12:11.429899439Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f69fffbc-ac79-4c53-a64a-be32cd9d4f05 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:11 embed-certs-589812 crio[662]: time="2025-09-04 07:12:11.430216617Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f69fffbc-ac79-4c53-a64a-be32cd9d4f05 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:16 embed-certs-589812 crio[662]: time="2025-09-04 07:12:16.429537003Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e65c0059-0cfb-4d02-8354-04469b3154fc name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:16 embed-certs-589812 crio[662]: time="2025-09-04 07:12:16.429834473Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e65c0059-0cfb-4d02-8354-04469b3154fc name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:23 embed-certs-589812 crio[662]: time="2025-09-04 07:12:23.429135602Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=40d71d26-e69c-4334-9ef2-0c0b2193c1dd name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:23 embed-certs-589812 crio[662]: time="2025-09-04 07:12:23.429415720Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=40d71d26-e69c-4334-9ef2-0c0b2193c1dd name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:30 embed-certs-589812 crio[662]: time="2025-09-04 07:12:30.429622821Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a2f5bc08-ef06-44c6-a25b-e9c894eebbbc name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:30 embed-certs-589812 crio[662]: time="2025-09-04 07:12:30.429955167Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a2f5bc08-ef06-44c6-a25b-e9c894eebbbc name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	3a9e9f9f95a15       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   8fe1865eb4dd6       dashboard-metrics-scraper-6ffb444bf9-4tbhb
	afa0e6ea9b635       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   15bf762c9c47a       storage-provisioner
	f107300c89141       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   a3d76d7c6a35f       busybox
	c522dae6d74af       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   a4b2a1fb6cf3a       coredns-66bc5c9577-j5gww
	db5784a7ee37e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   b71d4bdd51f0f       kindnet-wtgxv
	a36bc9cde6aab       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago      Running             kube-proxy                  1                   f209ea9e0ae62       kube-proxy-xqvlx
	da3aa45c71a4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   15bf762c9c47a       storage-provisioner
	02be4ef72489d       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago      Running             kube-apiserver              1                   9c852c99349a6       kube-apiserver-embed-certs-589812
	9cafc6f062626       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago      Running             kube-controller-manager     1                   a26cfa98a53a5       kube-controller-manager-embed-certs-589812
	919de7ee74e8f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago      Running             kube-scheduler              1                   f7aae24dad753       kube-scheduler-embed-certs-589812
	136e620f58d0d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   c455e8a87f36f       etcd-embed-certs-589812
	
	
	==> coredns [c522dae6d74afb1a16f2a235b7bef26ec4cfd05d1b26ea73bc6aa1040ae84643] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53659 - 2854 "HINFO IN 7369246950217003682.7495171770416462074. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036663369s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-589812
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-589812
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=embed-certs-589812
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_52_26_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:52:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-589812
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 07:12:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 07:07:57 +0000   Thu, 04 Sep 2025 06:52:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 07:07:57 +0000   Thu, 04 Sep 2025 06:52:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 07:07:57 +0000   Thu, 04 Sep 2025 06:52:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 07:07:57 +0000   Thu, 04 Sep 2025 06:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-589812
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ae8441598cb47a2ab529a010d5cacbb
	  System UUID:                9cb4d768-9a5f-4c82-9fd6-13f2aad0d14f
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-j5gww                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     20m
	  kube-system                 etcd-embed-certs-589812                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         20m
	  kube-system                 kindnet-wtgxv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-apiserver-embed-certs-589812             250m (3%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-embed-certs-589812    200m (2%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-xqvlx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-embed-certs-589812             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-746fcd58dc-prlxr               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4tbhb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wlwcq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-589812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-589812 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x8 over 20m)  kubelet          Node embed-certs-589812 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node embed-certs-589812 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node embed-certs-589812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     20m                kubelet          Node embed-certs-589812 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           20m                node-controller  Node embed-certs-589812 event: Registered Node embed-certs-589812 in Controller
	  Normal   NodeReady                19m                kubelet          Node embed-certs-589812 status is now: NodeReady
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-589812 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-589812 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node embed-certs-589812 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node embed-certs-589812 event: Registered Node embed-certs-589812 in Controller
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +2.011770] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000003] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +1.535866] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000001] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.003918] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +2.555764] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000006] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000004] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +8.191102] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000008] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000002] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	
	
	==> etcd [136e620f58d0da79daaa7f8118e790ac652690df1da4c027e49d29374f801e1d] <==
	{"level":"warn","ts":"2025-09-04T06:53:48.846971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.853922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.861180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.868166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.875139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.881856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.889957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.896348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.927041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.933824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.940567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:48.992225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55996","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T07:03:48.236887Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1035}
	{"level":"info","ts":"2025-09-04T07:03:48.255976Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1035,"took":"18.73859ms","hash":1851288839,"current-db-size-bytes":3194880,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1302528,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-09-04T07:03:48.256050Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1851288839,"revision":1035,"compact-revision":-1}
	{"level":"info","ts":"2025-09-04T07:08:48.241892Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1311}
	{"level":"info","ts":"2025-09-04T07:08:48.244945Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1311,"took":"2.734586ms","hash":2678842265,"current-db-size-bytes":3194880,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1810432,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-04T07:08:48.244984Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2678842265,"revision":1311,"compact-revision":1035}
	{"level":"info","ts":"2025-09-04T07:10:06.924030Z","caller":"traceutil/trace.go:172","msg":"trace[2018976243] transaction","detail":"{read_only:false; response_revision:1635; number_of_response:1; }","duration":"118.049663ms","start":"2025-09-04T07:10:06.805961Z","end":"2025-09-04T07:10:06.924011Z","steps":["trace[2018976243] 'process raft request'  (duration: 117.917583ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T07:10:43.183114Z","caller":"traceutil/trace.go:172","msg":"trace[755094258] transaction","detail":"{read_only:false; response_revision:1669; number_of_response:1; }","duration":"110.122358ms","start":"2025-09-04T07:10:43.072972Z","end":"2025-09-04T07:10:43.183095Z","steps":["trace[755094258] 'process raft request'  (duration: 109.982949ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T07:11:47.566708Z","caller":"traceutil/trace.go:172","msg":"trace[826629631] linearizableReadLoop","detail":"{readStateIndex:1983; appliedIndex:1983; }","duration":"114.621054ms","start":"2025-09-04T07:11:47.452066Z","end":"2025-09-04T07:11:47.566687Z","steps":["trace[826629631] 'read index received'  (duration: 114.61348ms)","trace[826629631] 'applied index is now lower than readState.Index'  (duration: 6.562µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T07:11:47.566874Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.774042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T07:11:47.566964Z","caller":"traceutil/trace.go:172","msg":"trace[1365426590] range","detail":"{range_begin:/registry/services/specs; range_end:; response_count:0; response_revision:1721; }","duration":"114.897823ms","start":"2025-09-04T07:11:47.452055Z","end":"2025-09-04T07:11:47.566953Z","steps":["trace[1365426590] 'agreement among raft nodes before linearized reading'  (duration: 114.73077ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T07:11:47.566915Z","caller":"traceutil/trace.go:172","msg":"trace[424816492] transaction","detail":"{read_only:false; response_revision:1722; number_of_response:1; }","duration":"126.821763ms","start":"2025-09-04T07:11:47.440079Z","end":"2025-09-04T07:11:47.566900Z","steps":["trace[424816492] 'process raft request'  (duration: 126.692291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T07:11:48.254513Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.602878ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571764590306593301 > lease_revoke:<id:5b33991380b709b6>","response":"size:29"}
	
	
	==> kernel <==
	 07:12:32 up  4:55,  0 users,  load average: 2.99, 1.51, 1.39
	Linux embed-certs-589812 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [db5784a7ee37e8c68ba772498e333580e694587cb505ba865c6ea871e108f5a1] <==
	I0904 07:10:31.907366       1 main.go:301] handling current node
	I0904 07:10:41.906823       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:10:41.906864       1 main.go:301] handling current node
	I0904 07:10:51.906960       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:10:51.907020       1 main.go:301] handling current node
	I0904 07:11:01.908713       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:11:01.908752       1 main.go:301] handling current node
	I0904 07:11:11.911888       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:11:11.911921       1 main.go:301] handling current node
	I0904 07:11:21.906529       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:11:21.906585       1 main.go:301] handling current node
	I0904 07:11:31.906194       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:11:31.906225       1 main.go:301] handling current node
	I0904 07:11:41.911887       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:11:41.911922       1 main.go:301] handling current node
	I0904 07:11:51.911929       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:11:51.911970       1 main.go:301] handling current node
	I0904 07:12:01.907946       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:12:01.907990       1 main.go:301] handling current node
	I0904 07:12:11.915876       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:12:11.915912       1 main.go:301] handling current node
	I0904 07:12:21.906456       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:12:21.906504       1 main.go:301] handling current node
	I0904 07:12:31.906584       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0904 07:12:31.906638       1 main.go:301] handling current node
	
	
	==> kube-apiserver [02be4ef72489d4392f911e0670f92eed06830e855080845246dde88d6a655eb3] <==
	 > logger="UnhandledError"
	I0904 07:08:50.729202       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:09:40.019372       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:09:44.392030       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 07:09:50.728279       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:09:50.728329       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 07:09:50.728344       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:09:50.729396       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:09:50.729475       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 07:09:50.729488       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:10:40.147890       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:11:14.012899       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 07:11:50.729221       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:11:50.729270       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 07:11:50.729285       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:11:50.730415       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:11:50.730516       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 07:11:50.730532       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:12:04.880068       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [9cafc6f06262606529257c56da917e67d347655c38b404f7c4cdc000c6f4a852] <==
	I0904 07:06:25.201090       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:06:55.137667       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:06:55.208557       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:07:25.141844       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:07:25.215547       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:07:55.146819       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:07:55.222951       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:08:25.151623       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:08:25.229990       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:08:55.156079       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:08:55.236708       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:09:25.160504       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:09:25.243689       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:09:55.166190       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:09:55.251730       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:10:25.171097       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:10:25.260235       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:10:55.175979       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:10:55.268786       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:11:25.180416       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:11:25.276063       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:11:55.185293       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:11:55.283623       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:12:25.190251       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:12:25.290177       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [a36bc9cde6aab4b8aa2805106724a69da61f56fe5d00554c661d19d13a4f6b93] <==
	I0904 06:53:51.725383       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:53:51.861351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:53:51.962353       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:53:51.962391       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0904 06:53:51.962509       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:53:52.102819       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:53:52.102882       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:53:52.107255       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:53:52.107605       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:53:52.107632       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:53:52.110555       1 config.go:200] "Starting service config controller"
	I0904 06:53:52.110579       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:53:52.110600       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:53:52.110615       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:53:52.110635       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:53:52.110640       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:53:52.110651       1 config.go:309] "Starting node config controller"
	I0904 06:53:52.110662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:53:52.110669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:53:52.211391       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:53:52.211437       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 06:53:52.211436       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [919de7ee74e8fee36f9de7bc074a0b27a2912e590e7d25095502ed862ce411a3] <==
	I0904 06:53:47.429971       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:53:49.700217       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:53:49.700386       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0904 06:53:49.700434       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:53:49.700470       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:53:49.802079       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 06:53:49.802126       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:53:49.807337       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:53:49.807484       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:53:49.808677       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:53:49.809017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 06:53:49.927006       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 07:11:45 embed-certs-589812 kubelet[811]: E0904 07:11:45.580527     811 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969905580203837  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:11:45 embed-certs-589812 kubelet[811]: E0904 07:11:45.580579     811 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969905580203837  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:11:47 embed-certs-589812 kubelet[811]: I0904 07:11:47.429423     811 scope.go:117] "RemoveContainer" containerID="3a9e9f9f95a15d3a0acb0f861e52fcfbb4356461f195c4ace3f877fb91e7bf1f"
	Sep 04 07:11:47 embed-certs-589812 kubelet[811]: E0904 07:11:47.429676     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4tbhb_kubernetes-dashboard(f3c95b95-bd44-4fd4-8e19-a2d916fa0f62)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4tbhb" podUID="f3c95b95-bd44-4fd4-8e19-a2d916fa0f62"
	Sep 04 07:11:50 embed-certs-589812 kubelet[811]: E0904 07:11:50.430481     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wlwcq" podUID="ddf273f4-7295-4b47-a1af-b2f7c30d2f94"
	Sep 04 07:11:55 embed-certs-589812 kubelet[811]: E0904 07:11:55.582018     811 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969915581762938  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:11:55 embed-certs-589812 kubelet[811]: E0904 07:11:55.582075     811 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969915581762938  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:11:58 embed-certs-589812 kubelet[811]: I0904 07:11:58.429120     811 scope.go:117] "RemoveContainer" containerID="3a9e9f9f95a15d3a0acb0f861e52fcfbb4356461f195c4ace3f877fb91e7bf1f"
	Sep 04 07:11:58 embed-certs-589812 kubelet[811]: E0904 07:11:58.429704     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4tbhb_kubernetes-dashboard(f3c95b95-bd44-4fd4-8e19-a2d916fa0f62)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4tbhb" podUID="f3c95b95-bd44-4fd4-8e19-a2d916fa0f62"
	Sep 04 07:11:58 embed-certs-589812 kubelet[811]: E0904 07:11:58.433239     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-prlxr" podUID="58b70501-6011-4b99-80ff-1f9b422ae481"
	Sep 04 07:12:04 embed-certs-589812 kubelet[811]: E0904 07:12:04.430056     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wlwcq" podUID="ddf273f4-7295-4b47-a1af-b2f7c30d2f94"
	Sep 04 07:12:05 embed-certs-589812 kubelet[811]: E0904 07:12:05.583556     811 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969925583258295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:05 embed-certs-589812 kubelet[811]: E0904 07:12:05.583601     811 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969925583258295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:11 embed-certs-589812 kubelet[811]: E0904 07:12:11.430508     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-prlxr" podUID="58b70501-6011-4b99-80ff-1f9b422ae481"
	Sep 04 07:12:12 embed-certs-589812 kubelet[811]: I0904 07:12:12.429574     811 scope.go:117] "RemoveContainer" containerID="3a9e9f9f95a15d3a0acb0f861e52fcfbb4356461f195c4ace3f877fb91e7bf1f"
	Sep 04 07:12:12 embed-certs-589812 kubelet[811]: E0904 07:12:12.429765     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4tbhb_kubernetes-dashboard(f3c95b95-bd44-4fd4-8e19-a2d916fa0f62)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4tbhb" podUID="f3c95b95-bd44-4fd4-8e19-a2d916fa0f62"
	Sep 04 07:12:15 embed-certs-589812 kubelet[811]: E0904 07:12:15.585454     811 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969935585136923  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:15 embed-certs-589812 kubelet[811]: E0904 07:12:15.585506     811 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969935585136923  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:16 embed-certs-589812 kubelet[811]: E0904 07:12:16.430221     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wlwcq" podUID="ddf273f4-7295-4b47-a1af-b2f7c30d2f94"
	Sep 04 07:12:23 embed-certs-589812 kubelet[811]: E0904 07:12:23.429747     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-prlxr" podUID="58b70501-6011-4b99-80ff-1f9b422ae481"
	Sep 04 07:12:25 embed-certs-589812 kubelet[811]: I0904 07:12:25.430146     811 scope.go:117] "RemoveContainer" containerID="3a9e9f9f95a15d3a0acb0f861e52fcfbb4356461f195c4ace3f877fb91e7bf1f"
	Sep 04 07:12:25 embed-certs-589812 kubelet[811]: E0904 07:12:25.430374     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4tbhb_kubernetes-dashboard(f3c95b95-bd44-4fd4-8e19-a2d916fa0f62)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4tbhb" podUID="f3c95b95-bd44-4fd4-8e19-a2d916fa0f62"
	Sep 04 07:12:25 embed-certs-589812 kubelet[811]: E0904 07:12:25.586831     811 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969945586519139  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:25 embed-certs-589812 kubelet[811]: E0904 07:12:25.586864     811 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969945586519139  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:30 embed-certs-589812 kubelet[811]: E0904 07:12:30.430267     811 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wlwcq" podUID="ddf273f4-7295-4b47-a1af-b2f7c30d2f94"
	
	
	==> storage-provisioner [afa0e6ea9b635b90ae3047ad7a9771161aceb849079022cb4f3aa360b0ae3853] <==
	W0904 07:12:07.648565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:09.652053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:09.657022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:11.659731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:11.663635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:13.666131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:13.671216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:15.673784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:15.678079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:17.681351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:17.685477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:19.689001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:19.694310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:21.697115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:21.705032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:23.708138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:23.712146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:25.715603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:25.719388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:27.722430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:27.727258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:29.731169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:29.735317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:31.738585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:31.771376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [da3aa45c71a4c394a688ba0cada3665a08c23e51e587d98fad20c6d189740263] <==
	I0904 06:53:51.421931       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 06:54:21.424162       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-589812 -n embed-certs-589812
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-589812 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-prlxr kubernetes-dashboard-855c9754f9-wlwcq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-589812 describe pod metrics-server-746fcd58dc-prlxr kubernetes-dashboard-855c9754f9-wlwcq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-589812 describe pod metrics-server-746fcd58dc-prlxr kubernetes-dashboard-855c9754f9-wlwcq: exit status 1 (63.180751ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-prlxr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wlwcq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-589812 describe pod metrics-server-746fcd58dc-prlxr kubernetes-dashboard-855c9754f9-wlwcq: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (544.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (544.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f6t79" [c1e25916-a16a-4ee2-9aaa-895d41ffbe6e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 07:04:31.715348 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:57.336346 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:31.715385 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-04 07:12:40.992123576 +0000 UTC m=+4351.781154203
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 describe po kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-520775 describe po kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-f6t79
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-520775/192.168.103.2
Start Time:       Thu, 04 Sep 2025 06:54:05 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5xrlz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-5xrlz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79 to default-k8s-diff-port-520775
  Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     13m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     13m (x5 over 18m)     kubelet            Error: ErrImagePull
  Normal   BackOff    3m26s (x49 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     3m1s (x51 over 18m)   kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 logs kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-520775 logs kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard: exit status 1 (78.496589ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-f6t79" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-520775 logs kubernetes-dashboard-855c9754f9-f6t79 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-520775
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-520775:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b",
	        "Created": "2025-09-04T06:52:50.464909498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1797115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-04T06:53:49.68372136Z",
	            "FinishedAt": "2025-09-04T06:53:48.816578784Z"
	        },
	        "Image": "sha256:6f7d8b3ae805e64eb4efe058a75d43d384fe5989473cee7f8e24ea90eca28309",
	        "ResolvConfPath": "/var/lib/docker/containers/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b/hostname",
	        "HostsPath": "/var/lib/docker/containers/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b/hosts",
	        "LogPath": "/var/lib/docker/containers/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b/172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b-json.log",
	        "Name": "/default-k8s-diff-port-520775",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-520775:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-520775",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "172df401119a1d42b8eb3beca2bdd06da22727a5a8bc01483c7bd45b54a9585b",
	                "LowerDir": "/var/lib/docker/overlay2/5e09d1bda7a40a6f708c59900f6a849375301dbcff052f63e4d5f72ca87335fc-init/diff:/var/lib/docker/overlay2/00af8677cb60c76ca825d07bd2d1267a5f0b2d8d1147a86a8eb7a1b8e0189af8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e09d1bda7a40a6f708c59900f6a849375301dbcff052f63e4d5f72ca87335fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e09d1bda7a40a6f708c59900f6a849375301dbcff052f63e4d5f72ca87335fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e09d1bda7a40a6f708c59900f6a849375301dbcff052f63e4d5f72ca87335fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-520775",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-520775/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-520775",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-520775",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-520775",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bd7abb2a2334b072b79979d645221e469a509371e8a05103678f543cac4ce5",
	            "SandboxKey": "/var/run/docker/netns/a0bd7abb2a23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34279"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34280"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34283"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34281"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34282"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-520775": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:69:a6:d0:fa:c5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e6b099093f4d2bc50dc9a105202a4f66367015ccdbff2e4084d5a24df38669d",
	                    "EndpointID": "ecbf99d262604c276b491ddb13ca849ee24efef7e85cb28c75d854b4b7cd0be3",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-520775",
	                        "172df401119a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-520775 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-520775 logs -n 25: (3.187528419s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-444288 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo docker system info                                                                                                                          │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo systemctl cat cri-docker --no-pager                                                                                                         │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                    │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                              │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cri-dockerd --version                                                                                                                       │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo systemctl status containerd --all --full --no-pager                                                                                         │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ ssh     │ -p kindnet-444288 sudo systemctl cat containerd --no-pager                                                                                                         │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /lib/systemd/system/containerd.service                                                                                                  │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo cat /etc/containerd/config.toml                                                                                                             │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo containerd config dump                                                                                                                      │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo systemctl status crio --all --full --no-pager                                                                                               │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo systemctl cat crio --no-pager                                                                                                               │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                     │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ ssh     │ -p kindnet-444288 sudo crio config                                                                                                                                 │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ delete  │ -p kindnet-444288                                                                                                                                                  │ kindnet-444288            │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ start   │ -p custom-flannel-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-444288     │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	│ image   │ embed-certs-589812 image list --format=json                                                                                                                        │ embed-certs-589812        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ pause   │ -p embed-certs-589812 --alsologtostderr -v=1                                                                                                                       │ embed-certs-589812        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ unpause │ -p embed-certs-589812 --alsologtostderr -v=1                                                                                                                       │ embed-certs-589812        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ delete  │ -p embed-certs-589812                                                                                                                                              │ embed-certs-589812        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ delete  │ -p embed-certs-589812                                                                                                                                              │ embed-certs-589812        │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │ 04 Sep 25 07:12 UTC │
	│ start   │ -p enable-default-cni-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio    │ enable-default-cni-444288 │ jenkins │ v1.36.0 │ 04 Sep 25 07:12 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 07:12:38
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 07:12:38.850200 1842788 out.go:360] Setting OutFile to fd 1 ...
	I0904 07:12:38.850316 1842788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:12:38.850326 1842788 out.go:374] Setting ErrFile to fd 2...
	I0904 07:12:38.850332 1842788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:12:38.850595 1842788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 07:12:38.851329 1842788 out.go:368] Setting JSON to false
	I0904 07:12:38.852799 1842788 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":17709,"bootTime":1756952250,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 07:12:38.852886 1842788 start.go:140] virtualization: kvm guest
	I0904 07:12:38.854791 1842788 out.go:179] * [enable-default-cni-444288] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 07:12:38.856352 1842788 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 07:12:38.856406 1842788 notify.go:220] Checking for updates...
	I0904 07:12:38.859411 1842788 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 07:12:38.860790 1842788 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 07:12:38.861936 1842788 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 07:12:38.863222 1842788 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 07:12:38.864595 1842788 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 07:12:38.866481 1842788 config.go:182] Loaded profile config "calico-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:12:38.866644 1842788 config.go:182] Loaded profile config "custom-flannel-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:12:38.866777 1842788 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:12:38.866924 1842788 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 07:12:38.890630 1842788 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 07:12:38.890742 1842788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 07:12:38.942816 1842788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 07:12:38.933381587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 07:12:38.943005 1842788 docker.go:318] overlay module found
	I0904 07:12:38.944822 1842788 out.go:179] * Using the docker driver based on user configuration
	I0904 07:12:38.945948 1842788 start.go:304] selected driver: docker
	I0904 07:12:38.945964 1842788 start.go:918] validating driver "docker" against <nil>
	I0904 07:12:38.945979 1842788 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 07:12:38.947146 1842788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 07:12:39.001049 1842788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-09-04 07:12:38.990840785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 07:12:39.001274 1842788 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E0904 07:12:39.001510 1842788 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0904 07:12:39.001544 1842788 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 07:12:39.003402 1842788 out.go:179] * Using Docker driver with root privileges
	I0904 07:12:39.004687 1842788 cni.go:84] Creating CNI manager for "bridge"
	I0904 07:12:39.004711 1842788 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 07:12:39.004817 1842788 start.go:348] cluster config:
	{Name:enable-default-cni-444288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-444288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:12:39.006394 1842788 out.go:179] * Starting "enable-default-cni-444288" primary control-plane node in "enable-default-cni-444288" cluster
	I0904 07:12:39.007640 1842788 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 07:12:39.008955 1842788 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 07:12:39.010238 1842788 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:12:39.010291 1842788 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 07:12:39.010304 1842788 cache.go:58] Caching tarball of preloaded images
	I0904 07:12:39.010372 1842788 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 07:12:39.010393 1842788 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 07:12:39.010404 1842788 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 07:12:39.010520 1842788 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/enable-default-cni-444288/config.json ...
	I0904 07:12:39.010548 1842788 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/enable-default-cni-444288/config.json: {Name:mke340a9f8a36ba356e632829fe145fb8ae03979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:12:39.032795 1842788 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 07:12:39.032820 1842788 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 07:12:39.032838 1842788 cache.go:232] Successfully downloaded all kic artifacts
	I0904 07:12:39.032863 1842788 start.go:360] acquireMachinesLock for enable-default-cni-444288: {Name:mk2dfa14af821d8aaf33c80cb98163421359d270 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:12:39.032973 1842788 start.go:364] duration metric: took 89.883µs to acquireMachinesLock for "enable-default-cni-444288"
	I0904 07:12:39.033000 1842788 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-444288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:enable-default-cni-444288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 07:12:39.033066 1842788 start.go:125] createHost starting for "" (driver="docker")
	W0904 07:12:35.005947 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:37.505568 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Sep 04 07:11:20 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:20.944093720Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e8144645-b253-4925-a404-4c6a65865e15 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:24 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:24.943714081Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0a45e287-751f-4fee-85ed-76db71f1b245 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:24 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:24.943981864Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0a45e287-751f-4fee-85ed-76db71f1b245 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:31 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:31.943701867Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8772de62-acad-4db6-84e4-acee468df045 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:31 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:31.944067979Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8772de62-acad-4db6-84e4-acee468df045 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:39 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:39.944322693Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=225f3335-3264-4fe7-9be1-99f7207e18fa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:39 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:39.944612365Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=225f3335-3264-4fe7-9be1-99f7207e18fa name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:43 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:43.943900468Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=398806be-8db8-4149-b510-efa44b545600 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:43 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:43.944131936Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=398806be-8db8-4149-b510-efa44b545600 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:53 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:53.945555344Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8ba5eb54-003d-40c5-8f82-c8bd84ebeb28 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:53 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:53.945758000Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8ba5eb54-003d-40c5-8f82-c8bd84ebeb28 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:57 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:57.944351203Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ae6ad8f8-f13b-43b6-91fb-2f6e1b5a2b38 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:11:57 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:11:57.944726017Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ae6ad8f8-f13b-43b6-91fb-2f6e1b5a2b38 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:06 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:06.944534782Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=39040436-8f58-458e-82e5-e9b1bdd77e06 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:06 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:06.944777192Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=39040436-8f58-458e-82e5-e9b1bdd77e06 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:08 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:08.943894967Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=60788813-182f-4c98-a2b9-8ccf4d41c2eb name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:08 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:08.944243786Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=60788813-182f-4c98-a2b9-8ccf4d41c2eb name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:20 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:20.944266235Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4a4197dc-510d-4dff-8fd6-f7f995feda40 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:20 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:20.944559282Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4a4197dc-510d-4dff-8fd6-f7f995feda40 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:21 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:21.943911285Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=68e85fbb-e7c1-43f7-bc21-75494d6f1314 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:21 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:21.944226364Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=68e85fbb-e7c1-43f7-bc21-75494d6f1314 name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:32 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:32.944057021Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2a3dcd72-ccb9-4ef6-af89-079d869b082d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:32 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:32.944359214Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2a3dcd72-ccb9-4ef6-af89-079d869b082d name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:35 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:35.945078235Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b3d61a29-49e2-4026-ba77-9ab6b811a5ec name=/runtime.v1.ImageService/ImageStatus
	Sep 04 07:12:35 default-k8s-diff-port-520775 crio[660]: time="2025-09-04 07:12:35.945376194Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b3d61a29-49e2-4026-ba77-9ab6b811a5ec name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	37e77ce3e0901       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   858e1880ccc57       dashboard-metrics-scraper-6ffb444bf9-w8cp6
	651dd18c7303d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   37eaa3ccd86c8       storage-provisioner
	8dd957a92d643       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   2acea6d36b1ec       busybox
	12896fe744d8a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   02869ebade6ec       coredns-66bc5c9577-hm47q
	177f8ea7a363c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   37eaa3ccd86c8       storage-provisioner
	67fd4be4663f3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   3083df93634b4       kindnet-wz7lg
	0cb99392ff213       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago      Running             kube-proxy                  1                   9b45d085cf127       kube-proxy-zrlrh
	1def9424a8c38       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago      Running             kube-scheduler              1                   66edf7d7784d8       kube-scheduler-default-k8s-diff-port-520775
	b657ea960e3b6       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago      Running             kube-apiserver              1                   b194f0de9e5a8       kube-apiserver-default-k8s-diff-port-520775
	c3ca0bd7fce1d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago      Running             kube-controller-manager     1                   4bfc430eb71cd       kube-controller-manager-default-k8s-diff-port-520775
	fe9e18633ad68       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   c5248de068b11       etcd-default-k8s-diff-port-520775
	
	
	==> coredns [12896fe744d8a440ab362f6ae7d00d19681e226f2e50d29a6a3e061bc755d6a0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53125 - 56639 "HINFO IN 505985679635397038.1776096097812087659. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020458499s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-520775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-520775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=default-k8s-diff-port-520775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_53_08_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-520775
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 07:12:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 07:12:42 +0000   Thu, 04 Sep 2025 06:53:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 07:12:42 +0000   Thu, 04 Sep 2025 06:53:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 07:12:42 +0000   Thu, 04 Sep 2025 06:53:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 07:12:42 +0000   Thu, 04 Sep 2025 06:53:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-520775
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2cc88600c8c4e1c895ddae82a9d3dfe
	  System UUID:                17e666a0-ae84-4286-9b81-3776014bb3a5
	  Boot ID:                    04ef57f1-30be-45a2-b84c-b20b1e806bda
	  Kernel Version:             5.15.0-1083-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-hm47q                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-520775                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-wz7lg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-520775             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-520775    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-zrlrh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-520775             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-gws8j                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-w8cp6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f6t79                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           19m                node-controller  Node default-k8s-diff-port-520775 event: Registered Node default-k8s-diff-port-520775 in Controller
	  Normal   NodeReady                19m                kubelet          Node default-k8s-diff-port-520775 status is now: NodeReady
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-520775 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-520775 event: Registered Node default-k8s-diff-port-520775 in Controller
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +2.011770] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000003] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +1.535866] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000001] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +0.003918] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-806214837f28
	[  +0.000006] ll header: 00000000: d2 82 15 b6 69 3f aa b7 ff 9e ed 42 08 00
	[  +2.555764] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000006] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000004] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000001] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +8.191102] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000008] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000005] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-1e6b099093f4
	[  +0.000002] ll header: 00000000: 9a 66 db b3 52 6d 62 69 a6 d0 fa c5 08 00
	
	
	==> etcd [fe9e18633ad685a5e18223d4de6fa0bd95b9ff7a556105fd4cc0b9449f68f31c] <==
	{"level":"warn","ts":"2025-09-04T06:53:59.248025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.257756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.265112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.274577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.299986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.308112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.319642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.325226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.332851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.355962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.401215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.408770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.438675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.445447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.454772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T06:53:59.507091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45634","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-04T07:03:58.622907Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":983}
	{"level":"info","ts":"2025-09-04T07:03:58.629178Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":983,"took":"5.952558ms","hash":3630892510,"current-db-size-bytes":3149824,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":3149824,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2025-09-04T07:03:58.629220Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3630892510,"revision":983,"compact-revision":-1}
	{"level":"info","ts":"2025-09-04T07:08:58.628886Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1261}
	{"level":"info","ts":"2025-09-04T07:08:58.631518Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1261,"took":"2.367782ms","hash":4005203377,"current-db-size-bytes":3149824,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1875968,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-04T07:08:58.631556Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4005203377,"revision":1261,"compact-revision":983}
	{"level":"info","ts":"2025-09-04T07:10:06.394894Z","caller":"traceutil/trace.go:172","msg":"trace[133361936] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"116.956073ms","start":"2025-09-04T07:10:06.277914Z","end":"2025-09-04T07:10:06.394870Z","steps":["trace[133361936] 'process raft request'  (duration: 116.811738ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T07:12:25.179465Z","caller":"traceutil/trace.go:172","msg":"trace[1267645652] transaction","detail":"{read_only:false; response_revision:1695; number_of_response:1; }","duration":"120.965098ms","start":"2025-09-04T07:12:25.058479Z","end":"2025-09-04T07:12:25.179444Z","steps":["trace[1267645652] 'process raft request'  (duration: 120.813511ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T07:12:31.360944Z","caller":"traceutil/trace.go:172","msg":"trace[442970347] transaction","detail":"{read_only:false; response_revision:1699; number_of_response:1; }","duration":"146.246394ms","start":"2025-09-04T07:12:31.214675Z","end":"2025-09-04T07:12:31.360921Z","steps":["trace[442970347] 'process raft request'  (duration: 145.538084ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:12:44 up  4:55,  0 users,  load average: 3.14, 1.59, 1.42
	Linux default-k8s-diff-port-520775 5.15.0-1083-gcp #92~20.04.1-Ubuntu SMP Tue Apr 29 09:12:55 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [67fd4be4663f399a5ab71bec17ea18252f8bdac63c94a8b38f9892bedf5e6ebd] <==
	I0904 07:10:42.510789       1 main.go:301] handling current node
	I0904 07:10:52.509955       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:10:52.509999       1 main.go:301] handling current node
	I0904 07:11:02.510085       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:11:02.510122       1 main.go:301] handling current node
	I0904 07:11:12.516871       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:11:12.516915       1 main.go:301] handling current node
	I0904 07:11:22.517389       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:11:22.517455       1 main.go:301] handling current node
	I0904 07:11:32.510120       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:11:32.510157       1 main.go:301] handling current node
	I0904 07:11:42.510130       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:11:42.510170       1 main.go:301] handling current node
	I0904 07:11:52.517609       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:11:52.517656       1 main.go:301] handling current node
	I0904 07:12:02.517568       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:12:02.517704       1 main.go:301] handling current node
	I0904 07:12:12.510670       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:12:12.510713       1 main.go:301] handling current node
	I0904 07:12:22.517866       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:12:22.517901       1 main.go:301] handling current node
	I0904 07:12:32.509915       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:12:32.509948       1 main.go:301] handling current node
	I0904 07:12:42.509909       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0904 07:12:42.509961       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b657ea960e3b6bcf1c194db3a320f280623b711353707a906b9aa137fbb3678d] <==
	I0904 07:09:01.242629       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:09:09.316994       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:09:55.257007       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 07:10:01.241651       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:10:01.241712       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 07:10:01.241731       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:10:01.242872       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:10:01.242970       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 07:10:01.242982       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:10:35.612286       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:11:19.301552       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 07:11:58.516412       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0904 07:12:01.242786       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:12:01.242848       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0904 07:12:01.242866       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0904 07:12:01.243921       1 handler_proxy.go:99] no RequestInfo found in the context
	E0904 07:12:01.244027       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0904 07:12:01.244041       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0904 07:12:30.201341       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [c3ca0bd7fce1d06b880bcd74e973b0fba7c77720f38d0d574df75a25383a8c46] <==
	I0904 07:06:35.629759       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:07:05.548883       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:07:05.637066       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:07:35.552867       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:07:35.644072       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:08:05.558320       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:08:05.651508       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:08:35.562165       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:08:35.659282       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:09:05.567587       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:09:05.666502       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:09:35.571850       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:09:35.673474       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:10:05.576527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:10:05.681706       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:10:35.581482       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:10:35.689352       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:11:05.586292       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:11:05.696837       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:11:35.592239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:11:35.704229       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:12:05.597569       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:12:05.712474       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0904 07:12:35.603308       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0904 07:12:35.721806       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0cb99392ff213a29f74c574a1f464514f40d13fb8b2ad415260fbe656f861f78] <==
	I0904 06:54:02.313807       1 server_linux.go:53] "Using iptables proxy"
	I0904 06:54:02.481309       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 06:54:02.581712       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 06:54:02.581749       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0904 06:54:02.581851       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:54:02.702806       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0904 06:54:02.702870       1 server_linux.go:132] "Using iptables Proxier"
	I0904 06:54:02.707621       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:54:02.708314       1 server.go:527] "Version info" version="v1.34.0"
	I0904 06:54:02.708370       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:54:02.709852       1 config.go:200] "Starting service config controller"
	I0904 06:54:02.709885       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 06:54:02.709884       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 06:54:02.709909       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 06:54:02.709994       1 config.go:106] "Starting endpoint slice config controller"
	I0904 06:54:02.710022       1 config.go:309] "Starting node config controller"
	I0904 06:54:02.710031       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 06:54:02.710024       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 06:54:02.810407       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 06:54:02.810421       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 06:54:02.810433       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 06:54:02.810456       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1def9424a8c382f52727704fa488898d6b4bf4fb2cc4750aa640e9abba2caeef] <==
	I0904 06:54:00.312077       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:54:00.316820       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 06:54:00.317029       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:54:00.317632       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:54:00.317057       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0904 06:54:00.418138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 06:54:00.418315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 06:54:00.418419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 06:54:00.418459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 06:54:00.418505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 06:54:00.418619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 06:54:00.418691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 06:54:00.418610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 06:54:00.418834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 06:54:00.418851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 06:54:00.419014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 06:54:00.419027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 06:54:00.419173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 06:54:00.419173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 06:54:00.419220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 06:54:00.419334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 06:54:00.419440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 06:54:00.419604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 06:54:00.420531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I0904 06:54:01.717847       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 07:11:56 default-k8s-diff-port-520775 kubelet[808]: E0904 07:11:56.169001     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969916168774291  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:11:56 default-k8s-diff-port-520775 kubelet[808]: E0904 07:11:56.169042     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969916168774291  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:11:56 default-k8s-diff-port-520775 kubelet[808]: I0904 07:11:56.943740     808 scope.go:117] "RemoveContainer" containerID="37e77ce3e0901fe0d52d9cf4e4c160003eb2ab0e23d71771f117b37fc900d366"
	Sep 04 07:11:56 default-k8s-diff-port-520775 kubelet[808]: E0904 07:11:56.943980     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w8cp6_kubernetes-dashboard(964b57fc-3542-48a2-a344-ab740188dfea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w8cp6" podUID="964b57fc-3542-48a2-a344-ab740188dfea"
	Sep 04 07:11:57 default-k8s-diff-port-520775 kubelet[808]: E0904 07:11:57.945083     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79" podUID="c1e25916-a16a-4ee2-9aaa-895d41ffbe6e"
	Sep 04 07:12:06 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:06.170081     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969926169897969  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:06 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:06.170116     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969926169897969  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:06 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:06.945179     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-gws8j" podUID="16bf9326-2429-4d6b-a6ed-6dc44262c35e"
	Sep 04 07:12:08 default-k8s-diff-port-520775 kubelet[808]: I0904 07:12:08.943927     808 scope.go:117] "RemoveContainer" containerID="37e77ce3e0901fe0d52d9cf4e4c160003eb2ab0e23d71771f117b37fc900d366"
	Sep 04 07:12:08 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:08.944118     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w8cp6_kubernetes-dashboard(964b57fc-3542-48a2-a344-ab740188dfea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w8cp6" podUID="964b57fc-3542-48a2-a344-ab740188dfea"
	Sep 04 07:12:08 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:08.944559     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79" podUID="c1e25916-a16a-4ee2-9aaa-895d41ffbe6e"
	Sep 04 07:12:16 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:16.171295     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969936171050822  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:16 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:16.171340     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969936171050822  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:20 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:20.944900     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-gws8j" podUID="16bf9326-2429-4d6b-a6ed-6dc44262c35e"
	Sep 04 07:12:21 default-k8s-diff-port-520775 kubelet[808]: I0904 07:12:21.943341     808 scope.go:117] "RemoveContainer" containerID="37e77ce3e0901fe0d52d9cf4e4c160003eb2ab0e23d71771f117b37fc900d366"
	Sep 04 07:12:21 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:21.943574     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w8cp6_kubernetes-dashboard(964b57fc-3542-48a2-a344-ab740188dfea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w8cp6" podUID="964b57fc-3542-48a2-a344-ab740188dfea"
	Sep 04 07:12:21 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:21.944579     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79" podUID="c1e25916-a16a-4ee2-9aaa-895d41ffbe6e"
	Sep 04 07:12:26 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:26.172476     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969946172237328  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:26 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:26.172518     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969946172237328  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:32 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:32.944751     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-gws8j" podUID="16bf9326-2429-4d6b-a6ed-6dc44262c35e"
	Sep 04 07:12:35 default-k8s-diff-port-520775 kubelet[808]: I0904 07:12:35.944554     808 scope.go:117] "RemoveContainer" containerID="37e77ce3e0901fe0d52d9cf4e4c160003eb2ab0e23d71771f117b37fc900d366"
	Sep 04 07:12:35 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:35.944806     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w8cp6_kubernetes-dashboard(964b57fc-3542-48a2-a344-ab740188dfea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w8cp6" podUID="964b57fc-3542-48a2-a344-ab740188dfea"
	Sep 04 07:12:35 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:35.945734     808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6t79" podUID="c1e25916-a16a-4ee2-9aaa-895d41ffbe6e"
	Sep 04 07:12:36 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:36.173572     808 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756969956173332964  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 04 07:12:36 default-k8s-diff-port-520775 kubelet[808]: E0904 07:12:36.173778     808 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756969956173332964  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	
	
	==> storage-provisioner [177f8ea7a363c3c3b050aea14ac0273afcac9985a9fe1621523044d67f709d9a] <==
	I0904 06:54:02.308824       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0904 06:54:32.311321       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [651dd18c7303d259954eb0ef6f0d2406a279376559ba295730ef62f148ff5b40] <==
	W0904 07:12:19.039914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:21.042976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:21.047030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:23.049752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:23.053502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:25.056471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:25.180654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:27.183318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:27.189157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:29.193387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:29.198337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:31.201949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:31.299082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:33.302693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:33.309613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:35.312743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:35.316909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:37.320119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:37.324456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:39.328366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:39.335594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:41.338842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:41.345449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:43.348847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 07:12:43.410448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-gws8j kubernetes-dashboard-855c9754f9-f6t79
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 describe pod metrics-server-746fcd58dc-gws8j kubernetes-dashboard-855c9754f9-f6t79
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-520775 describe pod metrics-server-746fcd58dc-gws8j kubernetes-dashboard-855c9754f9-f6t79: exit status 1 (124.435172ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-gws8j" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-f6t79" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-520775 describe pod metrics-server-746fcd58dc-gws8j kubernetes-dashboard-855c9754f9-f6t79: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (544.70s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (929.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (15m29.390640318s)

                                                
                                                
-- stdout --
	* [calico-444288] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-444288" primary control-plane node in "calico-444288" cluster
	* Pulling base image v0.0.47-1756936034-21409 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 07:11:44.174639 1830801 out.go:360] Setting OutFile to fd 1 ...
	I0904 07:11:44.174917 1830801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:11:44.174926 1830801 out.go:374] Setting ErrFile to fd 2...
	I0904 07:11:44.174931 1830801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:11:44.175154 1830801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 07:11:44.175726 1830801 out.go:368] Setting JSON to false
	I0904 07:11:44.176930 1830801 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":17654,"bootTime":1756952250,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 07:11:44.177046 1830801 start.go:140] virtualization: kvm guest
	I0904 07:11:44.179294 1830801 out.go:179] * [calico-444288] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 07:11:44.180753 1830801 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 07:11:44.180772 1830801 notify.go:220] Checking for updates...
	I0904 07:11:44.183263 1830801 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 07:11:44.184580 1830801 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 07:11:44.185798 1830801 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 07:11:44.187221 1830801 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 07:11:44.188938 1830801 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 07:11:44.190747 1830801 config.go:182] Loaded profile config "default-k8s-diff-port-520775": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:11:44.190835 1830801 config.go:182] Loaded profile config "embed-certs-589812": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:11:44.190906 1830801 config.go:182] Loaded profile config "kindnet-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:11:44.191023 1830801 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 07:11:44.213983 1830801 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 07:11:44.214131 1830801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 07:11:44.263032 1830801 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 07:11:44.254237423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 07:11:44.263150 1830801 docker.go:318] overlay module found
	I0904 07:11:44.264983 1830801 out.go:179] * Using the docker driver based on user configuration
	I0904 07:11:44.266967 1830801 start.go:304] selected driver: docker
	I0904 07:11:44.266988 1830801 start.go:918] validating driver "docker" against <nil>
	I0904 07:11:44.267007 1830801 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 07:11:44.268058 1830801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 07:11:44.318020 1830801 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 07:11:44.309027624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 07:11:44.318227 1830801 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 07:11:44.318553 1830801 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 07:11:44.320553 1830801 out.go:179] * Using Docker driver with root privileges
	I0904 07:11:44.321846 1830801 cni.go:84] Creating CNI manager for "calico"
	I0904 07:11:44.321871 1830801 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0904 07:11:44.321965 1830801 start.go:348] cluster config:
	{Name:calico-444288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-444288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:11:44.323328 1830801 out.go:179] * Starting "calico-444288" primary control-plane node in "calico-444288" cluster
	I0904 07:11:44.324403 1830801 cache.go:123] Beginning downloading kic base image for docker with crio
	I0904 07:11:44.325604 1830801 out.go:179] * Pulling base image v0.0.47-1756936034-21409 ...
	I0904 07:11:44.326783 1830801 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:11:44.326825 1830801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 07:11:44.326835 1830801 cache.go:58] Caching tarball of preloaded images
	I0904 07:11:44.326888 1830801 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon
	I0904 07:11:44.326943 1830801 preload.go:172] Found /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 07:11:44.326957 1830801 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 07:11:44.327091 1830801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/config.json ...
	I0904 07:11:44.327120 1830801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/config.json: {Name:mk3b4091eac6b7f06d8093afaa85e7d60de5a2a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:11:44.347678 1830801 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc in local docker daemon, skipping pull
	I0904 07:11:44.347708 1830801 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc exists in daemon, skipping load
	I0904 07:11:44.347731 1830801 cache.go:232] Successfully downloaded all kic artifacts
	I0904 07:11:44.347762 1830801 start.go:360] acquireMachinesLock for calico-444288: {Name:mkdefbc04aaef6f646cb33f95b45f3f770567c35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:11:44.347902 1830801 start.go:364] duration metric: took 116.89µs to acquireMachinesLock for "calico-444288"
	I0904 07:11:44.347942 1830801 start.go:93] Provisioning new machine with config: &{Name:calico-444288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-444288 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 07:11:44.348028 1830801 start.go:125] createHost starting for "" (driver="docker")
	I0904 07:11:44.350006 1830801 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0904 07:11:44.350225 1830801 start.go:159] libmachine.API.Create for "calico-444288" (driver="docker")
	I0904 07:11:44.350255 1830801 client.go:168] LocalClient.Create starting
	I0904 07:11:44.350320 1830801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem
	I0904 07:11:44.350349 1830801 main.go:141] libmachine: Decoding PEM data...
	I0904 07:11:44.350360 1830801 main.go:141] libmachine: Parsing certificate...
	I0904 07:11:44.350417 1830801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem
	I0904 07:11:44.350442 1830801 main.go:141] libmachine: Decoding PEM data...
	I0904 07:11:44.350469 1830801 main.go:141] libmachine: Parsing certificate...
	I0904 07:11:44.350905 1830801 cli_runner.go:164] Run: docker network inspect calico-444288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0904 07:11:44.367916 1830801 cli_runner.go:211] docker network inspect calico-444288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0904 07:11:44.368029 1830801 network_create.go:284] running [docker network inspect calico-444288] to gather additional debugging logs...
	I0904 07:11:44.368066 1830801 cli_runner.go:164] Run: docker network inspect calico-444288
	W0904 07:11:44.385086 1830801 cli_runner.go:211] docker network inspect calico-444288 returned with exit code 1
	I0904 07:11:44.385118 1830801 network_create.go:287] error running [docker network inspect calico-444288]: docker network inspect calico-444288: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-444288 not found
	I0904 07:11:44.385143 1830801 network_create.go:289] output of [docker network inspect calico-444288]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-444288 not found
	
	** /stderr **
	I0904 07:11:44.385256 1830801 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 07:11:44.402923 1830801 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a5bc02d2a27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:b0:fb:06:b8:46} reservation:<nil>}
	I0904 07:11:44.404000 1830801 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7f4544d24f56 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c9:24:c9:76:17} reservation:<nil>}
	I0904 07:11:44.404916 1830801 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8d033df89e75 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:42:94:35:ac:d0:4e} reservation:<nil>}
	I0904 07:11:44.405734 1830801 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f02d8e0bb4f2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:06:da:b5:75:c1} reservation:<nil>}
	I0904 07:11:44.406849 1830801 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f02bf0}
	I0904 07:11:44.406873 1830801 network_create.go:124] attempt to create docker network calico-444288 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0904 07:11:44.406942 1830801 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-444288 calico-444288
	I0904 07:11:44.461264 1830801 network_create.go:108] docker network calico-444288 192.168.85.0/24 created
	I0904 07:11:44.461297 1830801 kic.go:121] calculated static IP "192.168.85.2" for the "calico-444288" container
	I0904 07:11:44.461366 1830801 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0904 07:11:44.478504 1830801 cli_runner.go:164] Run: docker volume create calico-444288 --label name.minikube.sigs.k8s.io=calico-444288 --label created_by.minikube.sigs.k8s.io=true
	I0904 07:11:44.496544 1830801 oci.go:103] Successfully created a docker volume calico-444288
	I0904 07:11:44.496617 1830801 cli_runner.go:164] Run: docker run --rm --name calico-444288-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-444288 --entrypoint /usr/bin/test -v calico-444288:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -d /var/lib
	I0904 07:11:44.955644 1830801 oci.go:107] Successfully prepared a docker volume calico-444288
	I0904 07:11:44.955733 1830801 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:11:44.955819 1830801 kic.go:194] Starting extracting preloaded images to volume ...
	I0904 07:11:44.955893 1830801 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-444288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir
	I0904 07:11:49.455776 1830801 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-444288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc -I lz4 -xf /preloaded.tar -C /extractDir: (4.499833336s)
	I0904 07:11:49.455862 1830801 kic.go:203] duration metric: took 4.500073891s to extract preloaded images to volume ...
	W0904 07:11:49.455994 1830801 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0904 07:11:49.456122 1830801 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0904 07:11:49.505047 1830801 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-444288 --name calico-444288 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-444288 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-444288 --network calico-444288 --ip 192.168.85.2 --volume calico-444288:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc
	I0904 07:11:49.782413 1830801 cli_runner.go:164] Run: docker container inspect calico-444288 --format={{.State.Running}}
	I0904 07:11:49.801665 1830801 cli_runner.go:164] Run: docker container inspect calico-444288 --format={{.State.Status}}
	I0904 07:11:49.831886 1830801 cli_runner.go:164] Run: docker exec calico-444288 stat /var/lib/dpkg/alternatives/iptables
	I0904 07:11:49.876944 1830801 oci.go:144] the created container "calico-444288" has a running status.
	I0904 07:11:49.876978 1830801 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/calico-444288/id_rsa...
	I0904 07:11:50.018106 1830801 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/calico-444288/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0904 07:11:50.041176 1830801 cli_runner.go:164] Run: docker container inspect calico-444288 --format={{.State.Status}}
	I0904 07:11:50.062424 1830801 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0904 07:11:50.062449 1830801 kic_runner.go:114] Args: [docker exec --privileged calico-444288 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0904 07:11:50.112042 1830801 cli_runner.go:164] Run: docker container inspect calico-444288 --format={{.State.Status}}
	I0904 07:11:50.133435 1830801 machine.go:93] provisionDockerMachine start ...
	I0904 07:11:50.133556 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:11:50.165678 1830801 main.go:141] libmachine: Using SSH client type: native
	I0904 07:11:50.165976 1830801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34305 <nil> <nil>}
	I0904 07:11:50.165992 1830801 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 07:11:50.166735 1830801 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35054->127.0.0.1:34305: read: connection reset by peer
	I0904 07:11:53.287568 1830801 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-444288
	
	I0904 07:11:53.287599 1830801 ubuntu.go:182] provisioning hostname "calico-444288"
	I0904 07:11:53.287667 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:11:53.305781 1830801 main.go:141] libmachine: Using SSH client type: native
	I0904 07:11:53.306089 1830801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34305 <nil> <nil>}
	I0904 07:11:53.306107 1830801 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-444288 && echo "calico-444288" | sudo tee /etc/hostname
	I0904 07:11:53.439939 1830801 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-444288
	
	I0904 07:11:53.440028 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:11:53.459563 1830801 main.go:141] libmachine: Using SSH client type: native
	I0904 07:11:53.459874 1830801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34305 <nil> <nil>}
	I0904 07:11:53.459895 1830801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-444288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-444288/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-444288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 07:11:53.580468 1830801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 07:11:53.580510 1830801 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1516970/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1516970/.minikube}
	I0904 07:11:53.580541 1830801 ubuntu.go:190] setting up certificates
	I0904 07:11:53.580558 1830801 provision.go:84] configureAuth start
	I0904 07:11:53.580657 1830801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-444288
	I0904 07:11:53.599530 1830801 provision.go:143] copyHostCerts
	I0904 07:11:53.599609 1830801 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem, removing ...
	I0904 07:11:53.599624 1830801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem
	I0904 07:11:53.599711 1830801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.pem (1082 bytes)
	I0904 07:11:53.599898 1830801 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem, removing ...
	I0904 07:11:53.599919 1830801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem
	I0904 07:11:53.599952 1830801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/cert.pem (1123 bytes)
	I0904 07:11:53.600065 1830801 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem, removing ...
	I0904 07:11:53.600075 1830801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem
	I0904 07:11:53.600099 1830801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1516970/.minikube/key.pem (1675 bytes)
	I0904 07:11:53.600162 1830801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem org=jenkins.calico-444288 san=[127.0.0.1 192.168.85.2 calico-444288 localhost minikube]
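(The server certificate above is generated by minikube's own Go code; the following is only an illustrative openssl equivalent of issuing a cert with the same SAN set from the CA files named in the log. File names and flags here are assumptions, not part of the test run.)
	# Illustrative only: issue a server cert carrying the SANs shown above, signed by the existing CA.
	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.calico-444288" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:calico-444288,DNS:localhost,DNS:minikube") \
	  -out server.pem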
	I0904 07:11:53.677171 1830801 provision.go:177] copyRemoteCerts
	I0904 07:11:53.677245 1830801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 07:11:53.677297 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:11:53.695836 1830801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34305 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/calico-444288/id_rsa Username:docker}
	I0904 07:11:53.785908 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 07:11:53.809866 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 07:11:53.832884 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 07:11:53.857270 1830801 provision.go:87] duration metric: took 276.688994ms to configureAuth
	I0904 07:11:53.857307 1830801 ubuntu.go:206] setting minikube options for container-runtime
	I0904 07:11:53.857529 1830801 config.go:182] Loaded profile config "calico-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:11:53.857650 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:11:53.876910 1830801 main.go:141] libmachine: Using SSH client type: native
	I0904 07:11:53.877233 1830801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 127.0.0.1 34305 <nil> <nil>}
	I0904 07:11:53.877258 1830801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 07:11:54.088972 1830801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 07:11:54.089002 1830801 machine.go:96] duration metric: took 3.955541867s to provisionDockerMachine
	I0904 07:11:54.089011 1830801 client.go:171] duration metric: took 9.73875013s to LocalClient.Create
	I0904 07:11:54.089027 1830801 start.go:167] duration metric: took 9.738803209s to libmachine.API.Create "calico-444288"
	I0904 07:11:54.089034 1830801 start.go:293] postStartSetup for "calico-444288" (driver="docker")
	I0904 07:11:54.089043 1830801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 07:11:54.089106 1830801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 07:11:54.089145 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:11:54.107590 1830801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34305 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/calico-444288/id_rsa Username:docker}
	I0904 07:11:54.197095 1830801 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 07:11:54.200408 1830801 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0904 07:11:54.200437 1830801 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0904 07:11:54.200444 1830801 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0904 07:11:54.200451 1830801 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0904 07:11:54.200461 1830801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/addons for local assets ...
	I0904 07:11:54.200517 1830801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1516970/.minikube/files for local assets ...
	I0904 07:11:54.200607 1830801 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem -> 15207162.pem in /etc/ssl/certs
	I0904 07:11:54.200697 1830801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 07:11:54.209154 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 07:11:54.232348 1830801 start.go:296] duration metric: took 143.293896ms for postStartSetup
	I0904 07:11:54.232830 1830801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-444288
	I0904 07:11:54.251896 1830801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/config.json ...
	I0904 07:11:54.252188 1830801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 07:11:54.252249 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:11:54.270324 1830801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34305 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/calico-444288/id_rsa Username:docker}
	I0904 07:11:54.360996 1830801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0904 07:11:54.365276 1830801 start.go:128] duration metric: took 10.017221888s to createHost
	I0904 07:11:54.365305 1830801 start.go:83] releasing machines lock for "calico-444288", held for 10.017387664s
	I0904 07:11:54.365380 1830801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-444288
	I0904 07:11:54.383490 1830801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 07:11:54.383512 1830801 ssh_runner.go:195] Run: cat /version.json
	I0904 07:11:54.383569 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:11:54.383602 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:11:54.402640 1830801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34305 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/calico-444288/id_rsa Username:docker}
	I0904 07:11:54.403886 1830801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34305 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/calico-444288/id_rsa Username:docker}
	I0904 07:11:54.557786 1830801 ssh_runner.go:195] Run: systemctl --version
	I0904 07:11:54.562413 1830801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 07:11:54.703461 1830801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 07:11:54.708132 1830801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 07:11:54.728003 1830801 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0904 07:11:54.728089 1830801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 07:11:54.755698 1830801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
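(The find/mv runs above park the loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube installs later stays active. A hedged way to inspect or undo that on the node, not part of the test run:)
	# List what was parked by the rename above
	ls /etc/cni/net.d/*.mk_disabled
	# Restore a parked config if needed:
	# sudo mv /etc/cni/net.d/100-crio-bridge.conf.mk_disabled /etc/cni/net.d/100-crio-bridge.conf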
	I0904 07:11:54.755720 1830801 start.go:495] detecting cgroup driver to use...
	I0904 07:11:54.755753 1830801 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0904 07:11:54.755817 1830801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 07:11:54.770673 1830801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 07:11:54.781083 1830801 docker.go:218] disabling cri-docker service (if available) ...
	I0904 07:11:54.781134 1830801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 07:11:54.795368 1830801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 07:11:54.809522 1830801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 07:11:54.895458 1830801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 07:11:54.976440 1830801 docker.go:234] disabling docker service ...
	I0904 07:11:54.976506 1830801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 07:11:54.995540 1830801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 07:11:55.007193 1830801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 07:11:55.090049 1830801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 07:11:55.174861 1830801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 07:11:55.187124 1830801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 07:11:55.202839 1830801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 07:11:55.202894 1830801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:11:55.212454 1830801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 07:11:55.212520 1830801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:11:55.222280 1830801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:11:55.231343 1830801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:11:55.241199 1830801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 07:11:55.250149 1830801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:11:55.259611 1830801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:11:55.274673 1830801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:11:55.284138 1830801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 07:11:55.291782 1830801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 07:11:55.299556 1830801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:11:55.373180 1830801 ssh_runner.go:195] Run: sudo systemctl restart crio
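(All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf before the crio restart. Assuming upstream CRI-O section names, the resulting drop-in should look roughly like the sketch below; this is not captured from the run.)
	cat /etc/crio/crio.conf.d/02-crio.conf
	# [crio.image]
	# pause_image = "registry.k8s.io/pause:3.10.1"
	#
	# [crio.runtime]
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]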
	I0904 07:11:55.477917 1830801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 07:11:55.478027 1830801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 07:11:55.481585 1830801 start.go:563] Will wait 60s for crictl version
	I0904 07:11:55.481644 1830801 ssh_runner.go:195] Run: which crictl
	I0904 07:11:55.485037 1830801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 07:11:55.519480 1830801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0904 07:11:55.519556 1830801 ssh_runner.go:195] Run: crio --version
	I0904 07:11:55.554024 1830801 ssh_runner.go:195] Run: crio --version
	I0904 07:11:55.593819 1830801 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0904 07:11:55.595123 1830801 cli_runner.go:164] Run: docker network inspect calico-444288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0904 07:11:55.613976 1830801 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0904 07:11:55.617859 1830801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 07:11:55.628708 1830801 kubeadm.go:875] updating cluster {Name:calico-444288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-444288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 07:11:55.628842 1830801 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:11:55.628906 1830801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:11:55.698835 1830801 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:11:55.698855 1830801 crio.go:433] Images already preloaded, skipping extraction
	I0904 07:11:55.698898 1830801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:11:55.733197 1830801 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:11:55.733223 1830801 cache_images.go:85] Images are preloaded, skipping loading
	I0904 07:11:55.733231 1830801 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 crio true true} ...
	I0904 07:11:55.733332 1830801 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-444288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-444288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
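(The unit fragment above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; on the node, the base unit plus this drop-in could be inspected with the command below. Illustrative only, not run by the test.)
	systemctl cat kubelet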
	I0904 07:11:55.733396 1830801 ssh_runner.go:195] Run: crio config
	I0904 07:11:55.778174 1830801 cni.go:84] Creating CNI manager for "calico"
	I0904 07:11:55.778198 1830801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 07:11:55.778220 1830801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-444288 NodeName:calico-444288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 07:11:55.778356 1830801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-444288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
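(Recent kubeadm releases can lint a rendered config like the one above before init; a hedged check against the file minikube writes a few lines later, /var/tmp/minikube/kubeadm.yaml, would look like this. Not part of the test run.)
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml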
	
	I0904 07:11:55.778420 1830801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 07:11:55.787316 1830801 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 07:11:55.787392 1830801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 07:11:55.795942 1830801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0904 07:11:55.813246 1830801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 07:11:55.831678 1830801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0904 07:11:55.850208 1830801 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0904 07:11:55.854263 1830801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 07:11:55.866825 1830801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:11:55.948686 1830801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 07:11:55.962929 1830801 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288 for IP: 192.168.85.2
	I0904 07:11:55.962959 1830801 certs.go:194] generating shared ca certs ...
	I0904 07:11:55.962976 1830801 certs.go:226] acquiring lock for ca certs: {Name:mk2d06825c36f44304767b415a9a93c84edb2667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:11:55.963155 1830801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key
	I0904 07:11:55.963229 1830801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key
	I0904 07:11:55.963250 1830801 certs.go:256] generating profile certs ...
	I0904 07:11:55.963332 1830801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/client.key
	I0904 07:11:55.963352 1830801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/client.crt with IP's: []
	I0904 07:11:56.401149 1830801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/client.crt ...
	I0904 07:11:56.401184 1830801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/client.crt: {Name:mk06af862ecb15db3bb63421d04e7f3f737bc36c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:11:56.401403 1830801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/client.key ...
	I0904 07:11:56.401420 1830801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/client.key: {Name:mk2ff33768eb72a5a5f1f82e8ae460a3cc55616d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:11:56.401540 1830801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.key.9ed27a37
	I0904 07:11:56.401563 1830801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.crt.9ed27a37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0904 07:11:56.700972 1830801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.crt.9ed27a37 ...
	I0904 07:11:56.701007 1830801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.crt.9ed27a37: {Name:mk3cfb51673c1d9d4a53febdcd2a56e5ba7cbf8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:11:56.701198 1830801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.key.9ed27a37 ...
	I0904 07:11:56.701215 1830801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.key.9ed27a37: {Name:mk95d84baaf2f217a936d4fe22f4efd33f696539 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:11:56.701324 1830801 certs.go:381] copying /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.crt.9ed27a37 -> /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.crt
	I0904 07:11:56.701432 1830801 certs.go:385] copying /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.key.9ed27a37 -> /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.key
	I0904 07:11:56.701516 1830801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/proxy-client.key
	I0904 07:11:56.701541 1830801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/proxy-client.crt with IP's: []
	I0904 07:11:57.229100 1830801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/proxy-client.crt ...
	I0904 07:11:57.229131 1830801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/proxy-client.crt: {Name:mk2cbceca7ea877dcc5ea5ab685b13519bb10167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:11:57.229312 1830801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/proxy-client.key ...
	I0904 07:11:57.229325 1830801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/proxy-client.key: {Name:mkf7d5b9313d608f1db04b50b381ac1a11b9e106 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:11:57.229498 1830801 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem (1338 bytes)
	W0904 07:11:57.229544 1830801 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716_empty.pem, impossibly tiny 0 bytes
	I0904 07:11:57.229555 1830801 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 07:11:57.229578 1830801 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/ca.pem (1082 bytes)
	I0904 07:11:57.229600 1830801 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/cert.pem (1123 bytes)
	I0904 07:11:57.229620 1830801 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/key.pem (1675 bytes)
	I0904 07:11:57.229656 1830801 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem (1708 bytes)
	I0904 07:11:57.230193 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 07:11:57.255633 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 07:11:57.278438 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 07:11:57.301710 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 07:11:57.325626 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 07:11:57.348639 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 07:11:57.372750 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 07:11:57.396140 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/calico-444288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 07:11:57.419389 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/certs/1520716.pem --> /usr/share/ca-certificates/1520716.pem (1338 bytes)
	I0904 07:11:57.442293 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/ssl/certs/15207162.pem --> /usr/share/ca-certificates/15207162.pem (1708 bytes)
	I0904 07:11:57.465664 1830801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 07:11:57.488636 1830801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 07:11:57.505549 1830801 ssh_runner.go:195] Run: openssl version
	I0904 07:11:57.510725 1830801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1520716.pem && ln -fs /usr/share/ca-certificates/1520716.pem /etc/ssl/certs/1520716.pem"
	I0904 07:11:57.520500 1830801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1520716.pem
	I0904 07:11:57.523824 1830801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:07 /usr/share/ca-certificates/1520716.pem
	I0904 07:11:57.523884 1830801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1520716.pem
	I0904 07:11:57.530164 1830801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1520716.pem /etc/ssl/certs/51391683.0"
	I0904 07:11:57.539565 1830801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15207162.pem && ln -fs /usr/share/ca-certificates/15207162.pem /etc/ssl/certs/15207162.pem"
	I0904 07:11:57.548740 1830801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15207162.pem
	I0904 07:11:57.551979 1830801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:07 /usr/share/ca-certificates/15207162.pem
	I0904 07:11:57.552039 1830801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15207162.pem
	I0904 07:11:57.558748 1830801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15207162.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 07:11:57.568211 1830801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 07:11:57.578144 1830801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:11:57.582398 1830801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 06:00 /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:11:57.582470 1830801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:11:57.589423 1830801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
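(The openssl/ln pairs above implement OpenSSL's hashed CA directory layout: each CA is linked under /etc/ssl/certs/<subject-hash>.0 so that verification via CApath can find it. A minimal sketch of the same step for the minikube CA, mirroring the commands in the log:)
	# openssl prints the subject hash, e.g. b5213941 for the minikube CA above
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"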
	I0904 07:11:57.599214 1830801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 07:11:57.603061 1830801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 07:11:57.603124 1830801 kubeadm.go:392] StartCluster: {Name:calico-444288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-444288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:11:57.603220 1830801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 07:11:57.603301 1830801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 07:11:57.640751 1830801 cri.go:89] found id: ""
	I0904 07:11:57.640823 1830801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 07:11:57.649804 1830801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 07:11:57.658392 1830801 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0904 07:11:57.658448 1830801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 07:11:57.666938 1830801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 07:11:57.666956 1830801 kubeadm.go:157] found existing configuration files:
	
	I0904 07:11:57.667013 1830801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 07:11:57.675744 1830801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 07:11:57.675827 1830801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 07:11:57.684179 1830801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 07:11:57.692449 1830801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 07:11:57.692517 1830801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 07:11:57.700682 1830801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 07:11:57.709491 1830801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 07:11:57.709544 1830801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 07:11:57.718283 1830801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 07:11:57.727333 1830801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 07:11:57.727392 1830801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 07:11:57.736041 1830801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0904 07:11:57.790042 1830801 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0904 07:11:57.790334 1830801 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1083-gcp\n", err: exit status 1
	I0904 07:11:57.843332 1830801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 07:12:07.799471 1830801 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 07:12:07.799554 1830801 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 07:12:07.799654 1830801 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0904 07:12:07.799735 1830801 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1083-gcp
	I0904 07:12:07.799828 1830801 kubeadm.go:310] OS: Linux
	I0904 07:12:07.799921 1830801 kubeadm.go:310] CGROUPS_CPU: enabled
	I0904 07:12:07.800009 1830801 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0904 07:12:07.800059 1830801 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0904 07:12:07.800103 1830801 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0904 07:12:07.800146 1830801 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0904 07:12:07.800220 1830801 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0904 07:12:07.800288 1830801 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0904 07:12:07.800350 1830801 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0904 07:12:07.800417 1830801 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0904 07:12:07.800510 1830801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 07:12:07.800641 1830801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 07:12:07.800776 1830801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 07:12:07.800865 1830801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 07:12:07.802474 1830801 out.go:252]   - Generating certificates and keys ...
	I0904 07:12:07.802577 1830801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 07:12:07.802695 1830801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 07:12:07.802805 1830801 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 07:12:07.802903 1830801 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 07:12:07.802961 1830801 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 07:12:07.803006 1830801 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 07:12:07.803056 1830801 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 07:12:07.803169 1830801 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-444288 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0904 07:12:07.803228 1830801 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 07:12:07.803319 1830801 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-444288 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0904 07:12:07.803378 1830801 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 07:12:07.803446 1830801 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 07:12:07.803491 1830801 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 07:12:07.803559 1830801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 07:12:07.803601 1830801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 07:12:07.803650 1830801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 07:12:07.803706 1830801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 07:12:07.803851 1830801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 07:12:07.803927 1830801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 07:12:07.804047 1830801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 07:12:07.804159 1830801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 07:12:07.805264 1830801 out.go:252]   - Booting up control plane ...
	I0904 07:12:07.805356 1830801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 07:12:07.805448 1830801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 07:12:07.805536 1830801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 07:12:07.805686 1830801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 07:12:07.805867 1830801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 07:12:07.805986 1830801 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 07:12:07.806095 1830801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 07:12:07.806151 1830801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 07:12:07.806294 1830801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 07:12:07.806400 1830801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 07:12:07.806457 1830801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000883212s
	I0904 07:12:07.806546 1830801 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 07:12:07.806659 1830801 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0904 07:12:07.806779 1830801 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 07:12:07.806905 1830801 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 07:12:07.807023 1830801 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.074782775s
	I0904 07:12:07.807122 1830801 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.826027574s
	I0904 07:12:07.807230 1830801 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.501628298s
	I0904 07:12:07.807365 1830801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 07:12:07.807563 1830801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 07:12:07.807619 1830801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 07:12:07.807832 1830801 kubeadm.go:310] [mark-control-plane] Marking the node calico-444288 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 07:12:07.807927 1830801 kubeadm.go:310] [bootstrap-token] Using token: v19bdo.mcn8l8b305y096l7
	I0904 07:12:07.809329 1830801 out.go:252]   - Configuring RBAC rules ...
	I0904 07:12:07.809465 1830801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 07:12:07.809578 1830801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 07:12:07.809771 1830801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 07:12:07.809946 1830801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 07:12:07.810108 1830801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 07:12:07.810249 1830801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 07:12:07.810413 1830801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 07:12:07.810487 1830801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 07:12:07.810557 1830801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 07:12:07.810568 1830801 kubeadm.go:310] 
	I0904 07:12:07.810663 1830801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 07:12:07.810680 1830801 kubeadm.go:310] 
	I0904 07:12:07.810783 1830801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 07:12:07.810797 1830801 kubeadm.go:310] 
	I0904 07:12:07.810830 1830801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 07:12:07.810899 1830801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 07:12:07.810974 1830801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 07:12:07.810987 1830801 kubeadm.go:310] 
	I0904 07:12:07.811049 1830801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 07:12:07.811057 1830801 kubeadm.go:310] 
	I0904 07:12:07.811106 1830801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 07:12:07.811115 1830801 kubeadm.go:310] 
	I0904 07:12:07.811172 1830801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 07:12:07.811254 1830801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 07:12:07.811359 1830801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 07:12:07.811373 1830801 kubeadm.go:310] 
	I0904 07:12:07.811495 1830801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 07:12:07.811606 1830801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 07:12:07.811617 1830801 kubeadm.go:310] 
	I0904 07:12:07.811751 1830801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v19bdo.mcn8l8b305y096l7 \
	I0904 07:12:07.811906 1830801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d9630fca242e1003deb76bc0b7b7c54e9b6615fdc1e764ca81723c39d5691bf \
	I0904 07:12:07.811937 1830801 kubeadm.go:310] 	--control-plane 
	I0904 07:12:07.811952 1830801 kubeadm.go:310] 
	I0904 07:12:07.812065 1830801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 07:12:07.812079 1830801 kubeadm.go:310] 
	I0904 07:12:07.812168 1830801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v19bdo.mcn8l8b305y096l7 \
	I0904 07:12:07.812285 1830801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2d9630fca242e1003deb76bc0b7b7c54e9b6615fdc1e764ca81723c39d5691bf 
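(The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key, i.e. its DER-encoded SubjectPublicKeyInfo. It can be recomputed on the node; a sketch assuming an RSA CA at the certificatesDir kubeadm used here, /var/lib/minikube/certs/ca.crt:)
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'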
	I0904 07:12:07.812329 1830801 cni.go:84] Creating CNI manager for "calico"
	I0904 07:12:07.813791 1830801 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0904 07:12:07.816037 1830801 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 07:12:07.816059 1830801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0904 07:12:07.835948 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 07:12:09.355154 1830801 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.519157202s)
	I0904 07:12:09.355205 1830801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 07:12:09.355328 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-444288 minikube.k8s.io/updated_at=2025_09_04T07_12_09_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff minikube.k8s.io/name=calico-444288 minikube.k8s.io/primary=true
	I0904 07:12:09.355329 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:12:09.362863 1830801 ops.go:34] apiserver oom_adj: -16
	I0904 07:12:09.468095 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:12:09.968228 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:12:10.469021 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:12:10.968750 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:12:11.468682 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:12:11.968262 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:12:12.468529 1830801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:12:12.537702 1830801 kubeadm.go:1105] duration metric: took 3.182445162s to wait for elevateKubeSystemPrivileges
	I0904 07:12:12.537738 1830801 kubeadm.go:394] duration metric: took 14.934620133s to StartCluster
	I0904 07:12:12.537758 1830801 settings.go:142] acquiring lock: {Name:mk2d1c8a569b62879275d6daa2b799b595d6bca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:12:12.537834 1830801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 07:12:12.539691 1830801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1516970/kubeconfig: {Name:mkbb6a6dae4a65ddd44b276a5562dc0a264116a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:12:12.540004 1830801 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 07:12:12.540033 1830801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 07:12:12.540104 1830801 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 07:12:12.540204 1830801 config.go:182] Loaded profile config "calico-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:12:12.540223 1830801 addons.go:69] Setting storage-provisioner=true in profile "calico-444288"
	I0904 07:12:12.540229 1830801 addons.go:69] Setting default-storageclass=true in profile "calico-444288"
	I0904 07:12:12.540245 1830801 addons.go:238] Setting addon storage-provisioner=true in "calico-444288"
	I0904 07:12:12.540248 1830801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-444288"
	I0904 07:12:12.540277 1830801 host.go:66] Checking if "calico-444288" exists ...
	I0904 07:12:12.540599 1830801 cli_runner.go:164] Run: docker container inspect calico-444288 --format={{.State.Status}}
	I0904 07:12:12.540742 1830801 cli_runner.go:164] Run: docker container inspect calico-444288 --format={{.State.Status}}
	I0904 07:12:12.541835 1830801 out.go:179] * Verifying Kubernetes components...
	I0904 07:12:12.543244 1830801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:12:12.565705 1830801 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 07:12:12.565850 1830801 addons.go:238] Setting addon default-storageclass=true in "calico-444288"
	I0904 07:12:12.565904 1830801 host.go:66] Checking if "calico-444288" exists ...
	I0904 07:12:12.566468 1830801 cli_runner.go:164] Run: docker container inspect calico-444288 --format={{.State.Status}}
	I0904 07:12:12.567156 1830801 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 07:12:12.567179 1830801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 07:12:12.567232 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:12:12.586667 1830801 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 07:12:12.586694 1830801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 07:12:12.586756 1830801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-444288
	I0904 07:12:12.587072 1830801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34305 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/calico-444288/id_rsa Username:docker}
	I0904 07:12:12.612798 1830801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34305 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/calico-444288/id_rsa Username:docker}
	I0904 07:12:12.638529 1830801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 07:12:12.739952 1830801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 07:12:12.823258 1830801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 07:12:12.903067 1830801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 07:12:13.500772 1830801 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0904 07:12:13.502634 1830801 node_ready.go:35] waiting up to 15m0s for node "calico-444288" to be "Ready" ...
	I0904 07:12:13.778601 1830801 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0904 07:12:13.779690 1830801 addons.go:514] duration metric: took 1.239594649s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0904 07:12:14.006136 1830801 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-444288" context rescaled to 1 replicas
	W0904 07:12:15.506062 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:17.506266 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:19.506573 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:22.005811 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:24.006407 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:26.506185 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:28.506312 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:30.506365 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:32.506458 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:35.005947 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:37.505568 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:39.506482 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:42.006037 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:44.051586 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:46.506101 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:48.506454 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:50.506881 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:53.005940 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:55.505579 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:12:57.506855 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:00.006127 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:02.006667 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:04.506296 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:07.006177 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:09.506475 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:12.005562 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:14.006093 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:16.006588 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:18.506137 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:21.006031 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:23.006293 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:25.007759 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:27.506072 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:30.005575 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:32.006051 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:34.006140 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:36.506220 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:39.006935 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:41.505854 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:44.006839 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:46.506385 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:49.006337 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:51.006670 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:53.506174 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:55.506763 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:13:58.006156 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:00.006302 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:02.506297 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:05.012179 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:07.505386 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:09.505773 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:11.507119 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:14.006861 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:16.507628 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:19.005873 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:21.006037 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:23.006674 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:25.505762 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:27.505983 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:30.006165 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:32.006320 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:34.006456 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:36.505780 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:38.506340 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:41.006105 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:43.505515 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:45.505900 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:47.506397 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:50.006419 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:52.006534 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:54.506928 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:57.005980 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:14:59.505815 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:01.506288 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:04.006599 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:06.506123 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:09.005542 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:11.005753 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:13.506246 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:16.006281 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:18.006355 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:20.506418 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:23.006203 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:25.506423 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:28.005615 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:30.005886 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:32.006506 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:34.506487 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:37.006263 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:39.506092 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:42.005650 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:44.505720 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:47.005738 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:49.506006 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:51.506530 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:54.005751 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:56.505372 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:15:58.505900 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:00.506202 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:03.006336 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:05.505349 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:07.505623 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:09.505906 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:11.506599 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:14.006245 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:16.506080 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:18.506283 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:21.006179 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:23.506186 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:26.005563 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:28.005837 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:30.006589 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:32.505831 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:35.005735 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:37.006382 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:39.506225 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:42.005903 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:44.506007 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:47.006335 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:49.506108 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:52.005905 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:54.006212 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:56.506102 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:16:59.006222 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:01.505881 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:04.005466 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:06.005822 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:08.505837 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:10.506077 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:13.005445 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:15.006525 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:17.505984 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:19.506259 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:22.006323 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:24.505949 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:26.506295 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:29.005494 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:31.006323 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:33.505893 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:36.005798 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:38.505786 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:40.506062 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:43.005716 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:45.505770 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:47.505861 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:49.505924 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:52.005944 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:54.505868 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:57.005816 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:17:59.505584 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:02.008052 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:04.505902 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:06.506199 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:09.006047 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:11.506225 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:14.006107 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:16.505456 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:18.506686 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:21.006034 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:23.506070 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:26.006311 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:28.506521 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:31.005751 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:33.005839 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:35.506048 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:38.006233 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:40.006371 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:42.505561 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:45.005886 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:47.506142 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:50.005755 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:52.505675 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:55.005820 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:18:57.505754 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:00.006160 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:02.506289 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:05.005694 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:07.505841 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:10.006142 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:12.505980 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:14.506150 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:17.006071 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:19.006144 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:21.506335 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:24.005405 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:26.005991 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:28.506283 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:31.005875 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:33.505718 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:35.506512 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:38.005854 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:40.006248 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:42.505949 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:45.005803 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:47.006494 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:49.505835 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:52.006084 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:54.006402 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:56.505391 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:19:58.505529 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:00.505963 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:03.005432 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:05.005627 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:07.505748 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:10.005576 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:12.505532 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:14.506170 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:17.005892 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:19.505731 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:21.505789 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:24.005781 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:26.505511 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:28.506216 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:31.005661 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:33.006215 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:35.506251 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:38.005776 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:40.005861 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:42.006196 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:44.006367 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:46.505751 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:48.506346 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:51.006185 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:53.505628 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:56.006495 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:20:58.505279 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:00.506054 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:03.005654 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:05.006157 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:07.505822 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:09.505951 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:12.005429 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:14.006576 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:16.505916 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:19.005612 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:21.005822 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:23.005983 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:25.505911 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:28.005776 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:30.506065 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:33.005946 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:35.006053 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:37.506191 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:39.506229 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:42.005937 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:44.505925 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:47.005978 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:49.006155 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:51.505752 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:54.006047 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:56.505721 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:21:58.506262 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:01.005464 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:03.005999 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:05.006163 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:07.006255 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:09.505719 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:11.506329 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:14.005547 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:16.505949 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:18.506318 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:20.506505 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:23.005791 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:25.505674 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:27.506196 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:30.005370 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:32.505463 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:34.505947 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:36.506094 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:39.005697 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:41.005827 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:43.505611 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:45.506467 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:48.006163 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:50.505850 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:53.005846 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:55.505462 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:22:58.005200 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:00.006257 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:02.505558 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:05.005720 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:07.505522 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:09.506498 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:12.005469 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:14.505682 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:17.005410 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:19.005982 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:21.506167 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:24.006165 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:26.506046 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:29.006226 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:31.505776 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:34.005794 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:36.505868 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:39.005668 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:41.505898 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:44.005905 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:46.506437 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:49.005487 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:51.005637 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:53.006084 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:55.506142 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:23:57.506580 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:00.006043 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:02.505912 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:05.006058 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:07.505465 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:09.505597 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:11.505959 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:14.005803 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:16.005949 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:18.006103 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:20.006381 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:22.505955 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:24.506062 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:26.506148 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:29.005375 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:31.005995 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:33.006078 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:35.505703 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:37.506195 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:40.006351 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:42.506291 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:45.005392 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:47.005561 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:49.505730 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:52.005387 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:54.005575 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:56.006233 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:24:58.505345 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:00.505988 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:03.005826 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:05.006272 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:07.505902 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:09.505980 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:11.506352 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:13.506516 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:16.005461 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:18.005495 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:20.505492 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:23.005426 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:25.005876 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:27.006562 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:29.505628 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:32.005575 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:34.005662 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:36.505395 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:39.005123 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:41.005948 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:43.505512 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:45.505574 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:47.506159 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:50.005862 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:52.505548 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:55.005165 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:57.005866 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:25:59.505443 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:02.005550 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:04.505485 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:06.506109 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:09.005987 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:11.505596 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:14.005389 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:16.005831 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:18.505672 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:20.506485 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:23.005404 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:25.006021 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:27.505589 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:29.506466 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:32.005784 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:34.505907 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:36.506077 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:39.006149 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:41.505712 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:44.005593 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:46.006017 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:48.505787 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:51.005749 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:53.505973 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:56.005507 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:26:58.506139 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:27:01.006029 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:27:03.505565 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:27:06.005594 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:27:08.505309 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:27:10.506101 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	W0904 07:27:13.005907 1830801 node_ready.go:57] node "calico-444288" has "Ready":"False" status (will retry)
	I0904 07:27:13.503586 1830801 node_ready.go:38] duration metric: took 15m0.000911402s for node "calico-444288" to be "Ready" ...
	I0904 07:27:13.505993 1830801 out.go:203] 
	W0904 07:27:13.507906 1830801 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0904 07:27:13.507948 1830801 out.go:285] * 
	* 
	W0904 07:27:13.509796 1830801 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0904 07:27:13.511497 1830801 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (929.42s)
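
The calico/Start failure above is the 15m node-readiness wait timing out: the kubelet keeps reporting "Ready":"False", which on a freshly started cluster usually means the CNI (here Calico) never became functional. A minimal diagnostic sketch, assuming the calico-444288 profile and its kubeconfig context are still present and that the Calico manifest uses the upstream default k8s-app=calico-node label (both assumptions, not taken from this report):

	# Inspect the node condition that the wait loop polls; CNI initialization
	# problems normally surface in the Ready condition's message.
	kubectl --context calico-444288 describe node calico-444288

	# Check whether the Calico daemonset pods ever started (label selector is
	# the upstream default and may differ in this deployment).
	kubectl --context calico-444288 -n kube-system get pods -l k8s-app=calico-node -o wide

	# Collect full logs for the profile, as the report itself suggests.
	minikube logs -p calico-444288 --file=logs.txt
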

                                                
                                    

Test pass (283/326)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.04
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 4.25
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.15
21 TestBinaryMirror 0.78
22 TestOffline 96.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 155.15
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.47
35 TestAddons/parallel/Registry 14.52
36 TestAddons/parallel/RegistryCreds 0.82
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.87
41 TestAddons/parallel/CSI 60.51
42 TestAddons/parallel/Headlamp 16.43
43 TestAddons/parallel/CloudSpanner 5.52
44 TestAddons/parallel/LocalPath 15.19
45 TestAddons/parallel/NvidiaDevicePlugin 6.47
46 TestAddons/parallel/Yakd 11.91
47 TestAddons/parallel/AmdGpuDevicePlugin 5.47
48 TestAddons/StoppedEnableDisable 12.1
49 TestCertOptions 24.99
50 TestCertExpiration 233.09
52 TestForceSystemdFlag 26.22
53 TestForceSystemdEnv 29.54
55 TestKVMDriverInstallOrUpdate 1.28
59 TestErrorSpam/setup 22.27
60 TestErrorSpam/start 0.58
61 TestErrorSpam/status 0.87
62 TestErrorSpam/pause 1.48
63 TestErrorSpam/unpause 1.68
64 TestErrorSpam/stop 1.35
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 69.65
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 31.03
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.77
76 TestFunctional/serial/CacheCmd/cache/add_local 0.93
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 32.23
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.34
87 TestFunctional/serial/LogsFileCmd 1.34
88 TestFunctional/serial/InvalidService 4.02
90 TestFunctional/parallel/ConfigCmd 0.35
92 TestFunctional/parallel/DryRun 0.35
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 1.26
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 33.34
102 TestFunctional/parallel/SSHCmd 0.53
103 TestFunctional/parallel/CpCmd 1.57
104 TestFunctional/parallel/MySQL 21.67
105 TestFunctional/parallel/FileSync 0.25
106 TestFunctional/parallel/CertSync 1.54
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
114 TestFunctional/parallel/License 0.25
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.45
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.54
123 TestFunctional/parallel/ImageCommands/Setup 0.39
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.28
131 TestFunctional/parallel/ImageCommands/ImageRemove 1.03
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.49
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.55
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
135 TestFunctional/parallel/ProfileCmd/profile_list 0.66
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
137 TestFunctional/parallel/MountCmd/any-port 6.83
138 TestFunctional/parallel/MountCmd/specific-port 1.77
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.89
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.3
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ServiceCmd/List 1.67
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 186.7
164 TestMultiControlPlane/serial/DeployApp 5.85
165 TestMultiControlPlane/serial/PingHostFromPods 1.07
166 TestMultiControlPlane/serial/AddWorkerNode 54.18
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
169 TestMultiControlPlane/serial/CopyFile 15.6
170 TestMultiControlPlane/serial/StopSecondaryNode 12.52
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
172 TestMultiControlPlane/serial/RestartSecondaryNode 30.51
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.01
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 128.4
175 TestMultiControlPlane/serial/DeleteSecondaryNode 15.39
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
177 TestMultiControlPlane/serial/StopCluster 35.59
178 TestMultiControlPlane/serial/RestartCluster 73.91
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
180 TestMultiControlPlane/serial/AddSecondaryNode 78.34
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
185 TestJSONOutput/start/Command 69.23
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.68
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.59
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.79
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 33.24
211 TestKicCustomNetwork/use_default_bridge_network 25.18
212 TestKicExistingNetwork 26.43
213 TestKicCustomSubnet 27.58
214 TestKicStaticIP 26.15
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 54.84
219 TestMountStart/serial/StartWithMountFirst 8.05
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 8.16
222 TestMountStart/serial/VerifyMountSecond 0.24
223 TestMountStart/serial/DeleteFirst 1.59
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.18
226 TestMountStart/serial/RestartStopped 7.16
227 TestMountStart/serial/VerifyMountPostStop 0.24
230 TestMultiNode/serial/FreshStart2Nodes 123.06
231 TestMultiNode/serial/DeployApp2Nodes 4.66
232 TestMultiNode/serial/PingHostFrom2Pods 0.75
233 TestMultiNode/serial/AddNode 56.33
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.61
236 TestMultiNode/serial/CopyFile 8.83
237 TestMultiNode/serial/StopNode 2.09
238 TestMultiNode/serial/StartAfterStop 7.31
239 TestMultiNode/serial/RestartKeepsNodes 72.82
240 TestMultiNode/serial/DeleteNode 5.19
241 TestMultiNode/serial/StopMultiNode 23.77
242 TestMultiNode/serial/RestartMultiNode 50.71
243 TestMultiNode/serial/ValidateNameConflict 23.52
248 TestPreload 114.69
250 TestScheduledStopUnix 97.76
253 TestInsufficientStorage 12.25
254 TestRunningBinaryUpgrade 45.85
256 TestKubernetesUpgrade 321.21
257 TestMissingContainerUpgrade 97.69
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
260 TestStoppedBinaryUpgrade/Setup 0.48
261 TestNoKubernetes/serial/StartWithK8s 48.62
262 TestStoppedBinaryUpgrade/Upgrade 66.81
263 TestNoKubernetes/serial/StartWithStopK8s 6.2
264 TestNoKubernetes/serial/Start 4.65
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
266 TestNoKubernetes/serial/ProfileList 6.2
267 TestNoKubernetes/serial/Stop 1.2
268 TestNoKubernetes/serial/StartNoArgs 7.73
269 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
285 TestNetworkPlugins/group/false 5.87
290 TestPause/serial/Start 74.87
291 TestPause/serial/SecondStartNoReconfiguration 24.52
293 TestStartStop/group/old-k8s-version/serial/FirstStart 50.39
294 TestPause/serial/Pause 0.74
295 TestPause/serial/VerifyStatus 0.33
296 TestPause/serial/Unpause 0.61
297 TestPause/serial/PauseAgain 0.74
298 TestPause/serial/DeletePaused 2.64
299 TestPause/serial/VerifyDeletedResources 4.77
301 TestStartStop/group/no-preload/serial/FirstStart 51.39
302 TestStartStop/group/old-k8s-version/serial/DeployApp 10.25
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.26
304 TestStartStop/group/old-k8s-version/serial/Stop 11.92
305 TestStartStop/group/no-preload/serial/DeployApp 9.22
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/old-k8s-version/serial/SecondStart 52.12
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
309 TestStartStop/group/no-preload/serial/Stop 11.93
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/no-preload/serial/SecondStart 48.44
315 TestStartStop/group/embed-certs/serial/FirstStart 75.28
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.12
318 TestStartStop/group/embed-certs/serial/DeployApp 10.22
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.82
320 TestStartStop/group/embed-certs/serial/Stop 11.82
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.22
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.82
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/embed-certs/serial/SecondStart 47.8
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
327 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.86
334 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
335 TestStartStop/group/old-k8s-version/serial/Pause 2.71
337 TestStartStop/group/newest-cni/serial/FirstStart 30.91
338 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
339 TestStartStop/group/no-preload/serial/Pause 2.76
340 TestNetworkPlugins/group/auto/Start 73.77
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
343 TestStartStop/group/newest-cni/serial/Stop 1.2
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
345 TestStartStop/group/newest-cni/serial/SecondStart 14.5
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
349 TestStartStop/group/newest-cni/serial/Pause 2.7
350 TestNetworkPlugins/group/kindnet/Start 73.09
351 TestNetworkPlugins/group/auto/KubeletFlags 0.27
352 TestNetworkPlugins/group/auto/NetCatPod 9.18
353 TestNetworkPlugins/group/auto/DNS 0.12
354 TestNetworkPlugins/group/auto/Localhost 0.11
355 TestNetworkPlugins/group/auto/HairPin 0.11
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.19
360 TestNetworkPlugins/group/kindnet/DNS 0.14
361 TestNetworkPlugins/group/kindnet/Localhost 0.12
362 TestNetworkPlugins/group/kindnet/HairPin 0.12
363 TestNetworkPlugins/group/custom-flannel/Start 48.49
364 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
365 TestStartStop/group/embed-certs/serial/Pause 2.67
366 TestNetworkPlugins/group/enable-default-cni/Start 69.07
367 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
368 TestStartStop/group/default-k8s-diff-port/serial/Pause 3
369 TestNetworkPlugins/group/flannel/Start 55.38
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
372 TestNetworkPlugins/group/custom-flannel/DNS 0.13
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
375 TestNetworkPlugins/group/bridge/Start 69.19
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.75
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
380 TestNetworkPlugins/group/flannel/NetCatPod 11.17
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
384 TestNetworkPlugins/group/flannel/DNS 0.13
385 TestNetworkPlugins/group/flannel/Localhost 0.12
386 TestNetworkPlugins/group/flannel/HairPin 0.17
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
388 TestNetworkPlugins/group/bridge/NetCatPod 10.18
389 TestNetworkPlugins/group/bridge/DNS 0.12
390 TestNetworkPlugins/group/bridge/Localhost 0.1
391 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.28.0/json-events (5.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-649828 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-649828 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.037916377s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.04s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0904 06:00:14.289136 1520716 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0904 06:00:14.289222 1520716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-649828
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-649828: exit status 85 (63.098429ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-649828 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-649828 │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:00:09
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:00:09.293909 1520728 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:00:09.294134 1520728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:00:09.294142 1520728 out.go:374] Setting ErrFile to fd 2...
	I0904 06:00:09.294147 1520728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:00:09.294344 1520728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	W0904 06:00:09.294471 1520728 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-1516970/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-1516970/.minikube/config/config.json: no such file or directory
	I0904 06:00:09.295109 1520728 out.go:368] Setting JSON to true
	I0904 06:00:09.296180 1520728 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13359,"bootTime":1756952250,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:00:09.296238 1520728 start.go:140] virtualization: kvm guest
	I0904 06:00:09.298554 1520728 out.go:99] [download-only-649828] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0904 06:00:09.298694 1520728 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 06:00:09.298757 1520728 notify.go:220] Checking for updates...
	I0904 06:00:09.300151 1520728 out.go:171] MINIKUBE_LOCATION=21409
	I0904 06:00:09.301548 1520728 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:00:09.303119 1520728 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:00:09.304386 1520728 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:00:09.305581 1520728 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 06:00:09.307996 1520728 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 06:00:09.308252 1520728 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:00:09.330985 1520728 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:00:09.331060 1520728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:00:09.380827 1520728 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-04 06:00:09.372008134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:00:09.380933 1520728 docker.go:318] overlay module found
	I0904 06:00:09.382750 1520728 out.go:99] Using the docker driver based on user configuration
	I0904 06:00:09.382786 1520728 start.go:304] selected driver: docker
	I0904 06:00:09.382801 1520728 start.go:918] validating driver "docker" against <nil>
	I0904 06:00:09.382911 1520728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:00:09.430351 1520728 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-04 06:00:09.421130361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:00:09.430592 1520728 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 06:00:09.431139 1520728 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0904 06:00:09.431352 1520728 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 06:00:09.433000 1520728 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-649828 host does not exist
	  To start a cluster, run: "minikube start -p download-only-649828"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-649828
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (4.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-777410 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-777410 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.251167211s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.25s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0904 06:00:18.953407 1520716 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0904 06:00:18.953467 1520716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1516970/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-777410
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-777410: exit status 85 (63.217824ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-649828 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-649828 │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │ 04 Sep 25 06:00 UTC │
	│ delete  │ -p download-only-649828                                                                                                                                                   │ download-only-649828 │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │ 04 Sep 25 06:00 UTC │
	│ start   │ -o=json --download-only -p download-only-777410 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-777410 │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:00:14
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:00:14.744869 1521064 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:00:14.745001 1521064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:00:14.745011 1521064 out.go:374] Setting ErrFile to fd 2...
	I0904 06:00:14.745016 1521064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:00:14.745223 1521064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:00:14.745828 1521064 out.go:368] Setting JSON to true
	I0904 06:00:14.746804 1521064 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13365,"bootTime":1756952250,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:00:14.746859 1521064 start.go:140] virtualization: kvm guest
	I0904 06:00:14.748642 1521064 out.go:99] [download-only-777410] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:00:14.748837 1521064 notify.go:220] Checking for updates...
	I0904 06:00:14.750004 1521064 out.go:171] MINIKUBE_LOCATION=21409
	I0904 06:00:14.751358 1521064 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:00:14.752906 1521064 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:00:14.754155 1521064 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:00:14.755323 1521064 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 06:00:14.757357 1521064 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 06:00:14.757597 1521064 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:00:14.779676 1521064 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:00:14.779770 1521064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:00:14.826606 1521064 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:50 SystemTime:2025-09-04 06:00:14.81762155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:00:14.826749 1521064 docker.go:318] overlay module found
	I0904 06:00:14.828515 1521064 out.go:99] Using the docker driver based on user configuration
	I0904 06:00:14.828546 1521064 start.go:304] selected driver: docker
	I0904 06:00:14.828557 1521064 start.go:918] validating driver "docker" against <nil>
	I0904 06:00:14.828658 1521064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:00:14.873763 1521064 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:50 SystemTime:2025-09-04 06:00:14.865132241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:00:14.873941 1521064 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 06:00:14.874472 1521064 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0904 06:00:14.874655 1521064 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 06:00:14.876510 1521064 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-777410 host does not exist
	  To start a cluster, run: "minikube start -p download-only-777410"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-777410
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.15s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-123134 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-123134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-123134
--- PASS: TestDownloadOnlyKic (1.15s)

TestBinaryMirror (0.78s)

                                                
                                                
=== RUN   TestBinaryMirror
I0904 06:00:20.785819 1520716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-285738 --alsologtostderr --binary-mirror http://127.0.0.1:33975 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-285738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-285738
--- PASS: TestBinaryMirror (0.78s)

TestOffline (96.33s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-468259 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-468259 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m33.758790883s)
helpers_test.go:175: Cleaning up "offline-crio-468259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-468259
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-468259: (2.575864493s)
--- PASS: TestOffline (96.33s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-306757
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-306757: exit status 85 (55.899799ms)

                                                
                                                
-- stdout --
	* Profile "addons-306757" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-306757"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-306757
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-306757: exit status 85 (54.202469ms)

                                                
                                                
-- stdout --
	* Profile "addons-306757" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-306757"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (155.15s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-306757 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-306757 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m35.145391966s)
--- PASS: TestAddons/Setup (155.15s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-306757 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-306757 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-306757 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-306757 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e8f49885-099f-4a14-b083-309542695def] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e8f49885-099f-4a14-b083-309542695def] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003861854s
addons_test.go:694: (dbg) Run:  kubectl --context addons-306757 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-306757 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-306757 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

TestAddons/parallel/Registry (14.52s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 28.706298ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-s8qqg" [8143b624-da88-4323-8441-706602e975b8] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003741734s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-hklmr" [d5eee02a-bf3e-4376-a820-fe7cb6e83409] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003566763s
addons_test.go:392: (dbg) Run:  kubectl --context addons-306757 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-306757 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-306757 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.714821755s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 ip
2025/09/04 06:03:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.52s)

TestAddons/parallel/RegistryCreds (0.82s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.662997ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-306757
addons_test.go:332: (dbg) Run:  kubectl --context addons-306757 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.82s)

TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-8q767" [a689f8d4-83a1-4d67-8116-aee6490b5453] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003765864s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.27s)

TestAddons/parallel/MetricsServer (5.87s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.183553ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-fclpw" [606933e4-ec1f-4aa3-9826-a2f054695f6a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003235727s
addons_test.go:463: (dbg) Run:  kubectl --context addons-306757 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

TestAddons/parallel/CSI (60.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0904 06:03:14.911550 1520716 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.08103ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-306757 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-306757 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d7fd37aa-89d8-4497-90e9-12896e8df001] Pending
helpers_test.go:352: "task-pv-pod" [d7fd37aa-89d8-4497-90e9-12896e8df001] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d7fd37aa-89d8-4497-90e9-12896e8df001] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003983879s
addons_test.go:572: (dbg) Run:  kubectl --context addons-306757 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-306757 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-306757 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-306757 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-306757 delete pod task-pv-pod: (1.40110193s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-306757 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-306757 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-306757 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d7002686-6d3f-4b2d-8642-165164562eb5] Pending
helpers_test.go:352: "task-pv-pod-restore" [d7002686-6d3f-4b2d-8642-165164562eb5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d7002686-6d3f-4b2d-8642-165164562eb5] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003997538s
addons_test.go:614: (dbg) Run:  kubectl --context addons-306757 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-306757 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-306757 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-306757 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.56455204s)
--- PASS: TestAddons/parallel/CSI (60.51s)

TestAddons/parallel/Headlamp (16.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-306757 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-bg8dq" [5a80658a-ab86-4ffd-9f0c-7e36c9773298] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-bg8dq" [5a80658a-ab86-4ffd-9f0c-7e36c9773298] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004068036s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-306757 addons disable headlamp --alsologtostderr -v=1: (5.686280621s)
--- PASS: TestAddons/parallel/Headlamp (16.43s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-x5v8n" [191de4d5-5e94-481b-bd04-8ee7c66b7f7e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004335246s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                    
TestAddons/parallel/LocalPath (15.19s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-306757 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-306757 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [252cbe0a-c07f-40a7-ac18-6c652c7be7e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [252cbe0a-c07f-40a7-ac18-6c652c7be7e1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [252cbe0a-c07f-40a7-ac18-6c652c7be7e1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003604381s
addons_test.go:967: (dbg) Run:  kubectl --context addons-306757 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 ssh "cat /opt/local-path-provisioner/pvc-3834d7e9-4691-4682-8525-fbde797f55c6_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-306757 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-306757 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.19s)
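
The repeated Pending polls above are expected: the local-path storage class typically binds on first consumer, so test-pvc stays Pending until test-local-path is scheduled, writes file1, and the file is then read back from /opt/local-path-provisioner on the node. The testdata manifests are not shown in this report; a minimal equivalent pair (storage class, image and size are assumptions) would be:

kubectl --context addons-306757 apply -f - <<'EOF'
# Hypothetical equivalents of testdata/storage-provisioner-rancher/{pvc,pod}.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path    # assumed class created by the addon
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 64Mi               # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                # assumed image
    command: ["sh", "-c", "echo local-path > /test/file1"]
    volumeMounts:
    - name: data
      mountPath: /test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF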

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qljm9" [8e7ef4b6-e9c1-42de-adf1-b264f8fd5ce2] Running
I0904 06:03:14.914578 1520716 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0904 06:03:14.914595 1520716 kapi.go:107] duration metric: took 3.073788ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00457176s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                    
TestAddons/parallel/Yakd (11.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-zb6hl" [1b4af077-0a71-42b6-86bb-1bd4c9b5a703] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003741551s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-306757 addons disable yakd --alsologtostderr -v=1: (5.906272767s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-rp9pp" [bca304f8-9027-4298-bd42-61a669d3e210] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003509852s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.47s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.1s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-306757
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-306757: (11.851386868s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-306757
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-306757
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-306757
--- PASS: TestAddons/StoppedEnableDisable (12.10s)

                                                
                                    
TestCertOptions (24.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-387485 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-387485 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.58442858s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-387485 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-387485 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-387485 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-387485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-387485
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-387485: (1.854822038s)
--- PASS: TestCertOptions (24.99s)
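
The openssl step above is what confirms the extra --apiserver-ips/--apiserver-names were baked into the apiserver serving certificate and that the kubeconfig uses the custom port. Roughly the same checks by hand (profile name as above):

# List the SANs in the apiserver certificate inside the node
minikube -p cert-options-387485 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'

# The server URL for this cluster in the kubeconfig should end in the custom port 8555
kubectl --context cert-options-387485 config view --minify -o jsonpath='{.clusters[0].cluster.server}'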

                                                
                                    
TestCertExpiration (233.09s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-620042 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-620042 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.590453686s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-620042 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-620042 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.401268514s)
helpers_test.go:175: Cleaning up "cert-expiration-620042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-620042
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-620042: (2.09724827s)
--- PASS: TestCertExpiration (233.09s)
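
The two starts above first issue certificates valid for only 3m, let them lapse (which accounts for most of the 233s wall time), then restart with --cert-expiration=8760h so the certificates are regenerated with a one-year lifetime. Before the profile is deleted, the new expiry could be inspected with openssl (certificate path as used elsewhere in this report):

minikube -p cert-expiration-620042 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# notAfter should now be roughly 365 days out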

                                                
                                    
TestForceSystemdFlag (26.22s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-399706 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-399706 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.565708108s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-399706 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-399706" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-399706
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-399706: (2.358824646s)
--- PASS: TestForceSystemdFlag (26.22s)
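
The cat of /etc/crio/crio.conf.d/02-crio.conf above is how the test confirms that --force-systemd reached the container runtime. The file contents are not included in this report, but for CRI-O the relevant drop-in setting is the cgroup manager:

minikube -p force-systemd-flag-399706 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
# Expected to contain something like:
#   [crio.runtime]
#   cgroup_manager = "systemd"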

                                                
                                    
TestForceSystemdEnv (29.54s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-254309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-254309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.303821194s)
helpers_test.go:175: Cleaning up "force-systemd-env-254309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-254309
E0904 06:47:57.336305 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-254309: (4.231161323s)
--- PASS: TestForceSystemdEnv (29.54s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.28s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0904 06:47:51.201481 1520716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 06:47:51.201626 1520716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0904 06:47:51.248630 1520716 install.go:62] docker-machine-driver-kvm2: exit status 1
W0904 06:47:51.248783 1520716 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 06:47:51.248855 1520716 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1058075281/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.28s)

                                                
                                    
TestErrorSpam/setup (22.27s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-345312 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-345312 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-345312 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-345312 --driver=docker  --container-runtime=crio: (22.265514889s)
--- PASS: TestErrorSpam/setup (22.27s)

                                                
                                    
TestErrorSpam/start (0.58s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

                                                
                                    
TestErrorSpam/status (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 status
--- PASS: TestErrorSpam/status (0.87s)

                                                
                                    
TestErrorSpam/pause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 pause
--- PASS: TestErrorSpam/pause (1.48s)

                                                
                                    
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
TestErrorSpam/stop (1.35s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 stop: (1.171905318s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-345312 --log_dir /tmp/nospam-345312 stop
--- PASS: TestErrorSpam/stop (1.35s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-1516970/.minikube/files/etc/test/nested/copy/1520716/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (69.65s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-856205 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0904 06:07:57.344042 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:07:57.350444 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:07:57.361847 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:07:57.383256 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:07:57.424706 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:07:57.506162 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:07:57.667700 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:07:57.989417 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:07:58.631502 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:07:59.913442 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:08:02.476385 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:08:07.597952 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-856205 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.654355438s)
--- PASS: TestFunctional/serial/StartWithProxy (69.65s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (31.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0904 06:08:15.530193 1520716 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-856205 --alsologtostderr -v=8
E0904 06:08:17.839421 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:08:38.321738 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-856205 --alsologtostderr -v=8: (31.032242498s)
functional_test.go:678: soft start took 31.033010117s for "functional-856205" cluster.
I0904 06:08:46.562840 1520716 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (31.03s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-856205 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.93s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-856205 /tmp/TestFunctionalserialCacheCmdcacheadd_local2249295378/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 cache add minikube-local-cache-test:functional-856205
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 cache delete minikube-local-cache-test:functional-856205
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-856205
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.93s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (261.400922ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
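
The sequence above exercises the cache round trip: the image is removed on the node, crictl confirms it is gone (the expected exit status 1 above), minikube cache reload pushes the locally cached images back, and the final inspecti succeeds. The same round trip by hand:

minikube -p functional-856205 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-856205 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
minikube -p functional-856205 cache reload                                            # re-loads images from the local cache
minikube -p functional-856205 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again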

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 kubectl -- --context functional-856205 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-856205 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.23s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-856205 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0904 06:09:19.284141 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-856205 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.227401412s)
functional_test.go:776: restart took 32.227552885s for "functional-856205" cluster.
I0904 06:09:24.945258 1520716 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (32.23s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-856205 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 logs: (1.34303565s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 logs --file /tmp/TestFunctionalserialLogsFileCmd3350305324/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 logs --file /tmp/TestFunctionalserialLogsFileCmd3350305324/001/logs.txt: (1.34051241s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                    
TestFunctional/serial/InvalidService (4.02s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-856205 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-856205
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-856205: exit status 115 (323.003204ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31955 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-856205 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.02s)
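
The exit status 115 / SVC_UNREACHABLE above is the expected result because invalid-svc has no running pod behind it. The contents of testdata/invalidsvc.yaml are not shown in this report; any service whose selector matches no pods reproduces the behaviour (the manifest below is illustrative only):

kubectl --context functional-856205 apply -f - <<'EOF'
# Hypothetical stand-in for testdata/invalidsvc.yaml: a NodePort service with no backing pods
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist
  ports:
  - port: 80
    targetPort: 80
EOF
minikube -p functional-856205 service invalid-svc     # expected to fail with SVC_UNREACHABLE (exit 115)
kubectl --context functional-856205 delete svc invalid-svc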

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 config get cpus: exit status 14 (73.056475ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 config get cpus: exit status 14 (53.568069ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
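
The two exit status 14 results above correspond to running config get on a key that is not currently set; the sequence being exercised is:

minikube -p functional-856205 config unset cpus
minikube -p functional-856205 config get cpus     # exit 14: key not found
minikube -p functional-856205 config set cpus 2
minikube -p functional-856205 config get cpus     # prints 2
minikube -p functional-856205 config unset cpus
minikube -p functional-856205 config get cpus     # exit 14 again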

                                                
                                    
TestFunctional/parallel/DryRun (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-856205 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-856205 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (139.676448ms)

                                                
                                                
-- stdout --
	* [functional-856205] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:09:58.652891 1562601 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:09:58.653159 1562601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:09:58.653174 1562601 out.go:374] Setting ErrFile to fd 2...
	I0904 06:09:58.653181 1562601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:09:58.653380 1562601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:09:58.653921 1562601 out.go:368] Setting JSON to false
	I0904 06:09:58.654910 1562601 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13949,"bootTime":1756952250,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:09:58.655015 1562601 start.go:140] virtualization: kvm guest
	I0904 06:09:58.657166 1562601 out.go:179] * [functional-856205] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:09:58.658401 1562601 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:09:58.658423 1562601 notify.go:220] Checking for updates...
	I0904 06:09:58.660689 1562601 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:09:58.662138 1562601 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:09:58.663191 1562601 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:09:58.664309 1562601 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:09:58.665329 1562601 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:09:58.666909 1562601 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:09:58.667567 1562601 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:09:58.689600 1562601 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:09:58.689682 1562601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:09:58.737263 1562601 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-04 06:09:58.728627163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:09:58.737360 1562601 docker.go:318] overlay module found
	I0904 06:09:58.739073 1562601 out.go:179] * Using the docker driver based on existing profile
	I0904 06:09:58.740056 1562601 start.go:304] selected driver: docker
	I0904 06:09:58.740070 1562601 start.go:918] validating driver "docker" against &{Name:functional-856205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-856205 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:09:58.740156 1562601 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:09:58.742190 1562601 out.go:203] 
	W0904 06:09:58.743330 1562601 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 06:09:58.744410 1562601 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-856205 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-856205 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-856205 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (162.730092ms)

                                                
                                                
-- stdout --
	* [functional-856205] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:09:59.010801 1562791 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:09:59.011057 1562791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:09:59.011068 1562791 out.go:374] Setting ErrFile to fd 2...
	I0904 06:09:59.011074 1562791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:09:59.011411 1562791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:09:59.011988 1562791 out.go:368] Setting JSON to false
	I0904 06:09:59.013235 1562791 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13949,"bootTime":1756952250,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:09:59.013348 1562791 start.go:140] virtualization: kvm guest
	I0904 06:09:59.014926 1562791 out.go:179] * [functional-856205] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0904 06:09:59.016529 1562791 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:09:59.016531 1562791 notify.go:220] Checking for updates...
	I0904 06:09:59.019540 1562791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:09:59.021121 1562791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:09:59.022435 1562791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:09:59.023708 1562791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:09:59.024941 1562791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:09:59.026499 1562791 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:09:59.027039 1562791 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:09:59.051784 1562791 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:09:59.051905 1562791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:09:59.107671 1562791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-09-04 06:09:59.09700648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:09:59.107787 1562791 docker.go:318] overlay module found
	I0904 06:09:59.110288 1562791 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0904 06:09:59.111655 1562791 start.go:304] selected driver: docker
	I0904 06:09:59.111672 1562791 start.go:918] validating driver "docker" against &{Name:functional-856205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-856205 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:09:59.111789 1562791 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:09:59.114592 1562791 out.go:203] 
	W0904 06:09:59.115941 1562791 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0904 06:09:59.117320 1562791 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)
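
The -f flag in the second status invocation above renders minikube's status struct through a Go text/template, so the comma-separated keys (including the literal "kublet" label) are just template text around the {{.Field}} references. Below is a minimal sketch of that expansion, assuming a reduced struct with only the four referenced fields; the real status type has more.

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the fields the format string references;
// the actual minikube status struct is larger.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	// With these sample values this prints:
	// host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}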

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (33.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4301fed2-ef03-4a37-82c5-d3f72eb63cb0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005513088s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-856205 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-856205 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-856205 get pvc myclaim -o=json
I0904 06:09:38.117821 1520716 retry.go:31] will retry after 1.076743404s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:f27074b7-e881-4ddb-a075-aaee6a6b8317 ResourceVersion:720 Generation:0 CreationTimestamp:2025-09-04 06:09:38 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0018c17b0 VolumeMode:0xc0018c17c0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-856205 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-856205 apply -f testdata/storage-provisioner/pod.yaml
I0904 06:09:39.518325 1520716 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4dda51b9-85ff-48ab-8249-94bdbd63b3c3] Pending
helpers_test.go:352: "sp-pod" [4dda51b9-85ff-48ab-8249-94bdbd63b3c3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4dda51b9-85ff-48ab-8249-94bdbd63b3c3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003291153s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-856205 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-856205 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-856205 delete -f testdata/storage-provisioner/pod.yaml: (1.140153614s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-856205 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4ec293ed-f6e6-4274-92eb-8e719a45b812] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4ec293ed-f6e6-4274-92eb-8e719a45b812] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003470967s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-856205 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.34s)
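
The "will retry after 1.076743404s: testpvc phase = "Pending", want "Bound"" line earlier in this test comes from polling the claim until the storage provisioner binds it. A minimal sketch of that polling loop, assuming kubectl is on PATH and using a fixed one-second backoff instead of the harness's retry.go schedule:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

// waitForBound polls a PersistentVolumeClaim until its phase is "Bound".
// The context and claim names below are the ones used in this run.
func waitForBound(context, claim string, attempts int) error {
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", claim, "-o", "json").Output()
		if err != nil {
			return err
		}
		var pvc struct {
			Status struct {
				Phase string `json:"phase"`
			} `json:"status"`
		}
		if err := json.Unmarshal(out, &pvc); err != nil {
			return err
		}
		if pvc.Status.Phase == "Bound" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("pvc %q never reached phase Bound", claim)
}

func main() {
	if err := waitForBound("functional-856205", "myclaim", 30); err != nil {
		fmt.Println(err)
	}
}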

                                                
                                    
TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh -n functional-856205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 cp functional-856205:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd484251483/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh -n functional-856205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh -n functional-856205 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.57s)

                                                
                                    
TestFunctional/parallel/MySQL (21.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-856205 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-vfcgg" [54bff6db-779b-4972-85e1-dbb353c991c9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-vfcgg" [54bff6db-779b-4972-85e1-dbb353c991c9] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003791804s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-856205 exec mysql-5bb876957f-vfcgg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-856205 exec mysql-5bb876957f-vfcgg -- mysql -ppassword -e "show databases;": exit status 1 (108.690969ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0904 06:09:51.418558 1520716 retry.go:31] will retry after 958.965598ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-856205 exec mysql-5bb876957f-vfcgg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-856205 exec mysql-5bb876957f-vfcgg -- mysql -ppassword -e "show databases;": exit status 1 (103.068757ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0904 06:09:52.481623 1520716 retry.go:31] will retry after 1.332923875s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-856205 exec mysql-5bb876957f-vfcgg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-856205 exec mysql-5bb876957f-vfcgg -- mysql -ppassword -e "show databases;": exit status 1 (107.386879ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0904 06:09:53.922961 1520716 retry.go:31] will retry after 2.686438305s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-856205 exec mysql-5bb876957f-vfcgg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.67s)
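
The two failed exec attempts above are typical MySQL startup noise: ERROR 1045 while the image's entrypoint has not yet applied the configured root password, then ERROR 2002 while the server restarts and the socket is briefly absent, after which the query succeeds. A minimal sketch of the retry-with-backoff pattern the harness applies, assuming a simple doubling schedule rather than retry.go's exact timings:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryExec re-runs a command until it exits 0, doubling the wait between
// attempts. The kubectl invocation in main mirrors the one in the test.
func retryExec(maxAttempts int, name string, args ...string) error {
	wait := time.Second
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, wait)
		time.Sleep(wait)
		wait *= 2
	}
	return fmt.Errorf("command still failing after %d attempts", maxAttempts)
}

func main() {
	_ = retryExec(5, "kubectl", "--context", "functional-856205",
		"exec", "mysql-5bb876957f-vfcgg", "--",
		"mysql", "-ppassword", "-e", "show databases;")
}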

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1520716/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo cat /etc/test/nested/copy/1520716/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1520716.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo cat /etc/ssl/certs/1520716.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1520716.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo cat /usr/share/ca-certificates/1520716.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/15207162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo cat /etc/ssl/certs/15207162.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/15207162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo cat /usr/share/ca-certificates/15207162.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.54s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-856205 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
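
The --template argument above is evaluated client-side by kubectl as a Go template: it takes the first item of the node list and ranges over its metadata.labels map, printing each key followed by a space. The same range applied to a stand-in map, with invented label values, looks like this:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template body the test passes to kubectl, minus the
	// (index .items 0).metadata.labels selector, which the map replaces.
	const tmpl = `{{range $k, $v := .}}{{$k}} {{end}}`
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-856205",
		"kubernetes.io/os":       "linux",
	}
	t := template.Must(template.New("labels").Parse(tmpl))
	// Prints: kubernetes.io/hostname kubernetes.io/os
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}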

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 ssh "sudo systemctl is-active docker": exit status 1 (252.984309ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 ssh "sudo systemctl is-active containerd": exit status 1 (263.145255ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
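
The "Process exited with status 3" in both ssh errors above is how systemctl is-active reports an inactive unit; stdout still carries the human-readable state ("inactive"), which is what the assertion inspects. A minimal sketch of reading that convention from Go, assuming it runs on a host with systemd:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive reports whether a systemd unit is active. A non-zero exit from
// `systemctl is-active` (status 3 in the output above) just means the unit
// is not active; the printed state says why.
func isActive(unit string) (bool, string, error) {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err == nil {
		return true, state, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return false, state, nil
	}
	return false, state, err
}

func main() {
	active, state, err := isActive("docker")
	fmt.Println(active, state, err)
}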

                                                
                                    
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-856205 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-856205
localhost/kicbase/echo-server:functional-856205
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-856205 image ls --format short --alsologtostderr:
I0904 06:10:08.198920 1563827 out.go:360] Setting OutFile to fd 1 ...
I0904 06:10:08.199185 1563827 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:08.199195 1563827 out.go:374] Setting ErrFile to fd 2...
I0904 06:10:08.199199 1563827 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:08.199443 1563827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
I0904 06:10:08.200094 1563827 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:08.200201 1563827 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:08.200587 1563827 cli_runner.go:164] Run: docker container inspect functional-856205 --format={{.State.Status}}
I0904 06:10:08.218167 1563827 ssh_runner.go:195] Run: systemctl --version
I0904 06:10:08.218220 1563827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-856205
I0904 06:10:08.235120 1563827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33969 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/functional-856205/id_rsa Username:docker}
I0904 06:10:08.324268 1563827 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-856205 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-856205  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/nginx                 │ alpine             │ 4a86014ec6994 │ 53.9MB │
│ localhost/minikube-local-cache-test     │ functional-856205  │ 20e681504c81f │ 3.33kB │
│ localhost/my-image                      │ functional-856205  │ 0a9623b8dfc39 │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-856205 image ls --format table --alsologtostderr:
I0904 06:10:11.373213 1564395 out.go:360] Setting OutFile to fd 1 ...
I0904 06:10:11.373322 1564395 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:11.373331 1564395 out.go:374] Setting ErrFile to fd 2...
I0904 06:10:11.373336 1564395 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:11.373510 1564395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
I0904 06:10:11.374079 1564395 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:11.374167 1564395 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:11.374514 1564395 cli_runner.go:164] Run: docker container inspect functional-856205 --format={{.State.Status}}
I0904 06:10:11.391410 1564395 ssh_runner.go:195] Run: systemctl --version
I0904 06:10:11.391458 1564395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-856205
I0904 06:10:11.409280 1564395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33969 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/functional-856205/id_rsa Username:docker}
I0904 06:10:11.492143 1564395 ssh_runner.go:195] Run: sudo crictl images --output json
E0904 06:10:41.206347 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:12:57.336372 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:13:25.048623 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-856205 image ls --format json --alsologtostderr:
[{"id":"89d11c1d883c6d66f446f62422450221477d3754f05afac1108fc781fa69002d","repoDigests":["docker.io/library/0ffbb21b039a088029747186b229667fc0b401ad9900a76ffade105f228f7928-tmp@sha256:03d07b28b15f9a28acfcae77aeece8d2b89b61c7b82c39ef53449d238b2e3778"],"repoTags":[],"size":"1465612"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-856205"],"size":"4943877"},{"id":"20e681504c81fddbff9e4f84f6b437e0b6ae43fde84b041e5ce16e1c46e54291","repoDigests":["localhost/minikube-
local-cache-test@sha256:18fb6417e5318ebe3169c388a97c3412d6fe86a3c4da5e0b1df563c09bb08a91"],"repoTags":["localhost/minikube-local-cache-test:functional-856205"],"size":"3330"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@
sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcb
cc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0a9623b8dfc391d80d71825b853fc89aaae0d344aaa849e2cf1dd4c758643cde","repoDigests":["localhost/my-image@sha256:e556aca4756a7819817085e0cbf990e7a5ccd05b76a81e47f43c75a52dbf412e"],"repoTags":["localhost/my-image:functional-856205"],"size":"1468193"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"]
,"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53949946"},{"id":"905
50c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3
afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metri
cs-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-856205 image ls --format json --alsologtostderr:
I0904 06:10:11.162722 1564345 out.go:360] Setting OutFile to fd 1 ...
I0904 06:10:11.162970 1564345 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:11.162979 1564345 out.go:374] Setting ErrFile to fd 2...
I0904 06:10:11.162983 1564345 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:11.163194 1564345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
I0904 06:10:11.164374 1564345 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:11.164626 1564345 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:11.165414 1564345 cli_runner.go:164] Run: docker container inspect functional-856205 --format={{.State.Status}}
I0904 06:10:11.183297 1564345 ssh_runner.go:195] Run: systemctl --version
I0904 06:10:11.183351 1564345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-856205
I0904 06:10:11.199732 1564345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33969 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/functional-856205/id_rsa Username:docker}
I0904 06:10:11.288085 1564345 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
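
The stdout above is a flat JSON array of image records with id, repoDigests, repoTags, and size fields, so it is easy to post-process. A minimal sketch that re-runs the same command and prints one line per image, assuming the binary path and profile name used in this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the fields visible in the `image ls --format json` output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-856205",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		// IDs in this output are full 64-character digests; print a short prefix.
		fmt.Printf("%s  %s  %s bytes\n", img.ID[:12], tag, img.Size)
	}
}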

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-856205 image ls --format yaml --alsologtostderr:
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 20e681504c81fddbff9e4f84f6b437e0b6ae43fde84b041e5ce16e1c46e54291
repoDigests:
- localhost/minikube-local-cache-test@sha256:18fb6417e5318ebe3169c388a97c3412d6fe86a3c4da5e0b1df563c09bb08a91
repoTags:
- localhost/minikube-local-cache-test:functional-856205
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a
repoTags:
- docker.io/library/nginx:alpine
size: "53949946"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-856205
size: "4943877"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-856205 image ls --format yaml --alsologtostderr:
I0904 06:10:08.416036 1563877 out.go:360] Setting OutFile to fd 1 ...
I0904 06:10:08.416181 1563877 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:08.416191 1563877 out.go:374] Setting ErrFile to fd 2...
I0904 06:10:08.416195 1563877 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:08.416413 1563877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
I0904 06:10:08.417005 1563877 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:08.417103 1563877 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:08.417480 1563877 cli_runner.go:164] Run: docker container inspect functional-856205 --format={{.State.Status}}
I0904 06:10:08.434964 1563877 ssh_runner.go:195] Run: systemctl --version
I0904 06:10:08.435009 1563877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-856205
I0904 06:10:08.451496 1563877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33969 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/functional-856205/id_rsa Username:docker}
I0904 06:10:08.536283 1563877 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 ssh pgrep buildkitd: exit status 1 (242.823574ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image build -t localhost/my-image:functional-856205 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 image build -t localhost/my-image:functional-856205 testdata/build --alsologtostderr: (2.089080986s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-856205 image build -t localhost/my-image:functional-856205 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 89d11c1d883
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-856205
--> 0a9623b8dfc
Successfully tagged localhost/my-image:functional-856205
0a9623b8dfc391d80d71825b853fc89aaae0d344aaa849e2cf1dd4c758643cde
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-856205 image build -t localhost/my-image:functional-856205 testdata/build --alsologtostderr:
I0904 06:10:08.866977 1564023 out.go:360] Setting OutFile to fd 1 ...
I0904 06:10:08.867253 1564023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:08.867265 1564023 out.go:374] Setting ErrFile to fd 2...
I0904 06:10:08.867272 1564023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:10:08.867476 1564023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
I0904 06:10:08.868082 1564023 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:08.868804 1564023 config.go:182] Loaded profile config "functional-856205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:10:08.869239 1564023 cli_runner.go:164] Run: docker container inspect functional-856205 --format={{.State.Status}}
I0904 06:10:08.886797 1564023 ssh_runner.go:195] Run: systemctl --version
I0904 06:10:08.886841 1564023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-856205
I0904 06:10:08.904081 1564023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33969 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/functional-856205/id_rsa Username:docker}
I0904 06:10:08.988000 1564023 build_images.go:161] Building image from path: /tmp/build.4237137025.tar
I0904 06:10:08.988056 1564023 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0904 06:10:08.996363 1564023 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4237137025.tar
I0904 06:10:08.999425 1564023 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4237137025.tar: stat -c "%s %y" /var/lib/minikube/build/build.4237137025.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4237137025.tar': No such file or directory
I0904 06:10:08.999453 1564023 ssh_runner.go:362] scp /tmp/build.4237137025.tar --> /var/lib/minikube/build/build.4237137025.tar (3072 bytes)
I0904 06:10:09.021312 1564023 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4237137025
I0904 06:10:09.029121 1564023 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4237137025 -xf /var/lib/minikube/build/build.4237137025.tar
I0904 06:10:09.037466 1564023 crio.go:315] Building image: /var/lib/minikube/build/build.4237137025
I0904 06:10:09.037533 1564023 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-856205 /var/lib/minikube/build/build.4237137025 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0904 06:10:10.887007 1564023 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-856205 /var/lib/minikube/build/build.4237137025 --cgroup-manager=cgroupfs: (1.84944528s)
I0904 06:10:10.887068 1564023 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4237137025
I0904 06:10:10.895308 1564023 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4237137025.tar
I0904 06:10:10.903110 1564023 build_images.go:217] Built localhost/my-image:functional-856205 from /tmp/build.4237137025.tar
I0904 06:10:10.903139 1564023 build_images.go:133] succeeded building to: functional-856205
I0904 06:10:10.903143 1564023 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.54s)
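The stderr above traces minikube's crio build path: the build context is shipped to the node as a tarball under /var/lib/minikube/build, unpacked, and handed to podman build with the cgroupfs cgroup manager. A rough manual sketch of the same steps follows; the build.example directory name is illustrative, and the tarball is assumed to already be on the node (minikube's scp step places it there):

	out/minikube-linux-amd64 -p functional-856205 ssh "sudo mkdir -p /var/lib/minikube/build/build.example"
	out/minikube-linux-amd64 -p functional-856205 ssh "sudo tar -C /var/lib/minikube/build/build.example -xf /var/lib/minikube/build/build.example.tar"
	out/minikube-linux-amd64 -p functional-856205 ssh "sudo podman build -t localhost/my-image:functional-856205 /var/lib/minikube/build/build.example --cgroup-manager=cgroupfs"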

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-856205
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image load --daemon kicbase/echo-server:functional-856205 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 image load --daemon kicbase/echo-server:functional-856205 --alsologtostderr: (1.004764775s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image load --daemon kicbase/echo-server:functional-856205 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-856205
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image load --daemon kicbase/echo-server:functional-856205 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image save kicbase/echo-server:functional-856205 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 image save kicbase/echo-server:functional-856205 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.283232663s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image rm kicbase/echo-server:functional-856205 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.816347545s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.49s)
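Together, ImageSaveToFile and ImageLoadFromFile above exercise the tarball round trip: image save writes the image from the node's container storage to a tar on the host, and image load pushes that tar back in. A minimal sketch with an illustrative path:

	out/minikube-linux-amd64 -p functional-856205 image save kicbase/echo-server:functional-856205 /tmp/echo-server.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-856205 image load /tmp/echo-server.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-856205 image ls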

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-856205
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 image save --daemon kicbase/echo-server:functional-856205 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 image save --daemon kicbase/echo-server:functional-856205 --alsologtostderr: (2.5104496s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-856205
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "603.189733ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.121232ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.66s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "554.235955ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.248268ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdany-port2162687112/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1756966187988731409" to /tmp/TestFunctionalparallelMountCmdany-port2162687112/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1756966187988731409" to /tmp/TestFunctionalparallelMountCmdany-port2162687112/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1756966187988731409" to /tmp/TestFunctionalparallelMountCmdany-port2162687112/001/test-1756966187988731409
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (316.604056ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0904 06:09:48.305649 1520716 retry.go:31] will retry after 620.294425ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  4 06:09 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  4 06:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  4 06:09 test-1756966187988731409
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh cat /mount-9p/test-1756966187988731409
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-856205 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [a4011601-61c2-4c73-8fba-237c9f41d0a5] Pending
helpers_test.go:352: "busybox-mount" [a4011601-61c2-4c73-8fba-237c9f41d0a5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [a4011601-61c2-4c73-8fba-237c9f41d0a5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [a4011601-61c2-4c73-8fba-237c9f41d0a5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003475353s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-856205 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdany-port2162687112/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.83s)
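The 9p mount exercised above can be reproduced by hand with the same commands the test drives; the host directory is illustrative, and note that the first findmnt in the log raced the mount daemon and was retried after ~600ms:

	out/minikube-linux-amd64 mount -p functional-856205 /tmp/example-mount:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-856205 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-856205 ssh "sudo umount -f /mount-9p"    # cleanup, mirroring the test teardown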

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdspecific-port2495191835/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (255.24445ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0904 06:09:55.071059 1520716 retry.go:31] will retry after 590.197003ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdspecific-port2495191835/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 ssh "sudo umount -f /mount-9p": exit status 1 (244.66643ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-856205 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdspecific-port2495191835/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1643592570/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1643592570/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1643592570/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T" /mount1: exit status 1 (367.33207ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0904 06:09:56.955114 1520716 retry.go:31] will retry after 584.961254ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-856205 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1643592570/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1643592570/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-856205 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1643592570/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-856205 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-856205 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-856205 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1562039: os: process already finished
helpers_test.go:525: unable to kill pid 1561857: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-856205 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-856205 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-856205 apply -f testdata/testsvc.yaml
I0904 06:09:57.949213 1520716 detect.go:223] nested VM detected
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [ca75f482-940a-4f19-911d-a59faa68390e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [ca75f482-940a-4f19-911d-a59faa68390e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003668467s
I0904 06:10:07.117533 1520716 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.30s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-856205 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.85.195 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-856205 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
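The tunnel sequence above reduces to: start a tunnel, deploy a LoadBalancer service, and read the ingress IP the tunnel assigns. A hand-run sketch; the curl probe is an illustrative stand-in for the test's in-process HTTP check, and the IP is the one reported for this run:

	out/minikube-linux-amd64 -p functional-856205 tunnel --alsologtostderr &
	kubectl --context functional-856205 apply -f testdata/testsvc.yaml
	kubectl --context functional-856205 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.97.85.195/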

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 service list: (1.670977331s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-856205 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-856205 service list -o json: (1.67771488s)
functional_test.go:1504: Took "1.677821247s" to run "out/minikube-linux-amd64 -p functional-856205 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-856205
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-856205
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-856205
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (186.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0904 06:22:57.337097 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m6.044206507s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (186.70s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 kubectl -- rollout status deployment/busybox: (3.93441132s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-bfs2n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-mwcj5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-s4xdg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-bfs2n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-mwcj5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-s4xdg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-bfs2n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-mwcj5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-s4xdg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.85s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-bfs2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-bfs2n -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-mwcj5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-mwcj5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-s4xdg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 kubectl -- exec busybox-7b57f96db7-s4xdg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 node add --alsologtostderr -v 5: (53.347128131s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.18s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-188231 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp testdata/cp-test.txt ha-188231:/home/docker/cp-test.txt
E0904 06:24:20.409971 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1851419095/001/cp-test_ha-188231.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231:/home/docker/cp-test.txt ha-188231-m02:/home/docker/cp-test_ha-188231_ha-188231-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m02 "sudo cat /home/docker/cp-test_ha-188231_ha-188231-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231:/home/docker/cp-test.txt ha-188231-m03:/home/docker/cp-test_ha-188231_ha-188231-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m03 "sudo cat /home/docker/cp-test_ha-188231_ha-188231-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231:/home/docker/cp-test.txt ha-188231-m04:/home/docker/cp-test_ha-188231_ha-188231-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m04 "sudo cat /home/docker/cp-test_ha-188231_ha-188231-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp testdata/cp-test.txt ha-188231-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1851419095/001/cp-test_ha-188231-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m02:/home/docker/cp-test.txt ha-188231:/home/docker/cp-test_ha-188231-m02_ha-188231.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231 "sudo cat /home/docker/cp-test_ha-188231-m02_ha-188231.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m02:/home/docker/cp-test.txt ha-188231-m03:/home/docker/cp-test_ha-188231-m02_ha-188231-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m03 "sudo cat /home/docker/cp-test_ha-188231-m02_ha-188231-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m02:/home/docker/cp-test.txt ha-188231-m04:/home/docker/cp-test_ha-188231-m02_ha-188231-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m04 "sudo cat /home/docker/cp-test_ha-188231-m02_ha-188231-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp testdata/cp-test.txt ha-188231-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1851419095/001/cp-test_ha-188231-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m03:/home/docker/cp-test.txt ha-188231:/home/docker/cp-test_ha-188231-m03_ha-188231.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231 "sudo cat /home/docker/cp-test_ha-188231-m03_ha-188231.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m03:/home/docker/cp-test.txt ha-188231-m02:/home/docker/cp-test_ha-188231-m03_ha-188231-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m02 "sudo cat /home/docker/cp-test_ha-188231-m03_ha-188231-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m03:/home/docker/cp-test.txt ha-188231-m04:/home/docker/cp-test_ha-188231-m03_ha-188231-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m04 "sudo cat /home/docker/cp-test_ha-188231-m03_ha-188231-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp testdata/cp-test.txt ha-188231-m04:/home/docker/cp-test.txt
E0904 06:24:31.715365 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:24:31.721744 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:24:31.733119 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:24:31.754465 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m04 "sudo cat /home/docker/cp-test.txt"
E0904 06:24:31.796568 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:24:31.878021 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1851419095/001/cp-test_ha-188231-m04.txt
E0904 06:24:32.040151 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m04 "sudo cat /home/docker/cp-test.txt"
E0904 06:24:32.361999 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m04:/home/docker/cp-test.txt ha-188231:/home/docker/cp-test_ha-188231-m04_ha-188231.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m04 "sudo cat /home/docker/cp-test.txt"
E0904 06:24:33.003643 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231 "sudo cat /home/docker/cp-test_ha-188231-m04_ha-188231.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m04:/home/docker/cp-test.txt ha-188231-m02:/home/docker/cp-test_ha-188231-m04_ha-188231-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m02 "sudo cat /home/docker/cp-test_ha-188231-m04_ha-188231-m02.txt"
E0904 06:24:34.285157 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 cp ha-188231-m04:/home/docker/cp-test.txt ha-188231-m03:/home/docker/cp-test_ha-188231-m04_ha-188231-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m03 "sudo cat /home/docker/cp-test_ha-188231-m04_ha-188231-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.60s)
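Every hop in the copy matrix above follows the same two-step pattern: copy with minikube cp, then read the file back over ssh on the target node. One representative pair from this run:

	out/minikube-linux-amd64 -p ha-188231 cp testdata/cp-test.txt ha-188231-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-188231 ssh -n ha-188231-m02 "sudo cat /home/docker/cp-test.txt"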

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 node stop m02 --alsologtostderr -v 5
E0904 06:24:36.847322 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:24:41.968646 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 node stop m02 --alsologtostderr -v 5: (11.858386142s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5: exit status 7 (657.594714ms)

                                                
                                                
-- stdout --
	ha-188231
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-188231-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-188231-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-188231-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:24:47.078085 1590547 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:24:47.078353 1590547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:24:47.078370 1590547 out.go:374] Setting ErrFile to fd 2...
	I0904 06:24:47.078374 1590547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:24:47.078611 1590547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:24:47.078950 1590547 out.go:368] Setting JSON to false
	I0904 06:24:47.079002 1590547 mustload.go:65] Loading cluster: ha-188231
	I0904 06:24:47.079117 1590547 notify.go:220] Checking for updates...
	I0904 06:24:47.079494 1590547 config.go:182] Loaded profile config "ha-188231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:24:47.079518 1590547 status.go:174] checking status of ha-188231 ...
	I0904 06:24:47.080111 1590547 cli_runner.go:164] Run: docker container inspect ha-188231 --format={{.State.Status}}
	I0904 06:24:47.098886 1590547 status.go:371] ha-188231 host status = "Running" (err=<nil>)
	I0904 06:24:47.098909 1590547 host.go:66] Checking if "ha-188231" exists ...
	I0904 06:24:47.099262 1590547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-188231
	I0904 06:24:47.117104 1590547 host.go:66] Checking if "ha-188231" exists ...
	I0904 06:24:47.117440 1590547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:24:47.117496 1590547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-188231
	I0904 06:24:47.134919 1590547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33974 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/ha-188231/id_rsa Username:docker}
	I0904 06:24:47.237044 1590547 ssh_runner.go:195] Run: systemctl --version
	I0904 06:24:47.241607 1590547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:24:47.253381 1590547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:24:47.300856 1590547 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-09-04 06:24:47.292228267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:24:47.301352 1590547 kubeconfig.go:125] found "ha-188231" server: "https://192.168.49.254:8443"
	I0904 06:24:47.301381 1590547 api_server.go:166] Checking apiserver status ...
	I0904 06:24:47.301416 1590547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:24:47.312117 1590547 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	I0904 06:24:47.320774 1590547 api_server.go:182] apiserver freezer: "12:freezer:/docker/00d63b2bf04a845ed004f2931d066121b6cadff47d8d947e2a939a6aef2bcba0/crio/crio-f9e46a3837e9eafcb7bd313c6ec6d08188cb6d489c75cffc6181130b906dc78b"
	I0904 06:24:47.320831 1590547 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/00d63b2bf04a845ed004f2931d066121b6cadff47d8d947e2a939a6aef2bcba0/crio/crio-f9e46a3837e9eafcb7bd313c6ec6d08188cb6d489c75cffc6181130b906dc78b/freezer.state
	I0904 06:24:47.328934 1590547 api_server.go:204] freezer state: "THAWED"
	I0904 06:24:47.328958 1590547 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 06:24:47.333422 1590547 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 06:24:47.333452 1590547 status.go:463] ha-188231 apiserver status = Running (err=<nil>)
	I0904 06:24:47.333469 1590547 status.go:176] ha-188231 status: &{Name:ha-188231 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:24:47.333505 1590547 status.go:174] checking status of ha-188231-m02 ...
	I0904 06:24:47.333757 1590547 cli_runner.go:164] Run: docker container inspect ha-188231-m02 --format={{.State.Status}}
	I0904 06:24:47.351319 1590547 status.go:371] ha-188231-m02 host status = "Stopped" (err=<nil>)
	I0904 06:24:47.351342 1590547 status.go:384] host is not running, skipping remaining checks
	I0904 06:24:47.351351 1590547 status.go:176] ha-188231-m02 status: &{Name:ha-188231-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:24:47.351392 1590547 status.go:174] checking status of ha-188231-m03 ...
	I0904 06:24:47.351641 1590547 cli_runner.go:164] Run: docker container inspect ha-188231-m03 --format={{.State.Status}}
	I0904 06:24:47.369673 1590547 status.go:371] ha-188231-m03 host status = "Running" (err=<nil>)
	I0904 06:24:47.369725 1590547 host.go:66] Checking if "ha-188231-m03" exists ...
	I0904 06:24:47.370121 1590547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-188231-m03
	I0904 06:24:47.388078 1590547 host.go:66] Checking if "ha-188231-m03" exists ...
	I0904 06:24:47.388357 1590547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:24:47.388393 1590547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-188231-m03
	I0904 06:24:47.406002 1590547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33984 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/ha-188231-m03/id_rsa Username:docker}
	I0904 06:24:47.493230 1590547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:24:47.504148 1590547 kubeconfig.go:125] found "ha-188231" server: "https://192.168.49.254:8443"
	I0904 06:24:47.504176 1590547 api_server.go:166] Checking apiserver status ...
	I0904 06:24:47.504207 1590547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:24:47.514132 1590547 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1480/cgroup
	I0904 06:24:47.522692 1590547 api_server.go:182] apiserver freezer: "12:freezer:/docker/9324107ed8f078465ab4d54f313d871c44c7cf50eb53956c47db3230d900a8e3/crio/crio-019b024858e17c7315eae5532aeb12ba04372c203194df4875c9c53731963723"
	I0904 06:24:47.522751 1590547 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9324107ed8f078465ab4d54f313d871c44c7cf50eb53956c47db3230d900a8e3/crio/crio-019b024858e17c7315eae5532aeb12ba04372c203194df4875c9c53731963723/freezer.state
	I0904 06:24:47.531468 1590547 api_server.go:204] freezer state: "THAWED"
	I0904 06:24:47.531502 1590547 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0904 06:24:47.535584 1590547 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0904 06:24:47.535613 1590547 status.go:463] ha-188231-m03 apiserver status = Running (err=<nil>)
	I0904 06:24:47.535625 1590547 status.go:176] ha-188231-m03 status: &{Name:ha-188231-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:24:47.535640 1590547 status.go:174] checking status of ha-188231-m04 ...
	I0904 06:24:47.535996 1590547 cli_runner.go:164] Run: docker container inspect ha-188231-m04 --format={{.State.Status}}
	I0904 06:24:47.553861 1590547 status.go:371] ha-188231-m04 host status = "Running" (err=<nil>)
	I0904 06:24:47.553889 1590547 host.go:66] Checking if "ha-188231-m04" exists ...
	I0904 06:24:47.554210 1590547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-188231-m04
	I0904 06:24:47.572351 1590547 host.go:66] Checking if "ha-188231-m04" exists ...
	I0904 06:24:47.572670 1590547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:24:47.572724 1590547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-188231-m04
	I0904 06:24:47.590238 1590547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33989 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/ha-188231-m04/id_rsa Username:docker}
	I0904 06:24:47.676583 1590547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:24:47.686770 1590547 status.go:176] ha-188231-m04 status: &{Name:ha-188231-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.52s)
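The status log above walks through the two checks minikube runs against each control-plane node: read the apiserver container's freezer cgroup state (expecting "THAWED") and then probe /healthz on the load-balanced endpoint https://192.168.49.254:8443. The sketch below reproduces those two steps outside minikube; the cgroup path placeholders and the InsecureSkipVerify shortcut are assumptions made to keep the example short, not minikube's actual implementation.

// apiserver_health_sketch.go (illustrative, not minikube code)
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Step 1: freezer cgroup state of the apiserver container; "THAWED"
	// means it is not paused. The path below is a placeholder.
	out, err := exec.Command("sudo", "cat",
		"/sys/fs/cgroup/freezer/docker/<container-id>/crio/<crio-id>/freezer.state").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "freezer check failed:", err)
	} else {
		fmt.Println("freezer state:", strings.TrimSpace(string(out)))
	}

	// Step 2: probe /healthz on the control-plane endpoint. minikube's own
	// client trusts the cluster CA; certificate verification is skipped here
	// only to keep the sketch self-contained.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode)
}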

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (30.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 node start m02 --alsologtostderr -v 5
E0904 06:24:52.210485 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:25:12.692189 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 node start m02 --alsologtostderr -v 5: (29.492713559s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.007713726s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 stop --alsologtostderr -v 5: (30.753270859s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 start --wait true --alsologtostderr -v 5
E0904 06:25:53.655102 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:27:15.577073 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 start --wait true --alsologtostderr -v 5: (1m37.529935414s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (15.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 node delete m03 --alsologtostderr -v 5: (14.63971348s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (15.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 stop --alsologtostderr -v 5
E0904 06:27:57.337083 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 stop --alsologtostderr -v 5: (35.484267954s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5: exit status 7 (107.654206ms)

                                                
                                                
-- stdout --
	ha-188231
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-188231-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-188231-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:28:19.837593 1608181 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:28:19.837943 1608181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:28:19.837957 1608181 out.go:374] Setting ErrFile to fd 2...
	I0904 06:28:19.837962 1608181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:28:19.838203 1608181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:28:19.838393 1608181 out.go:368] Setting JSON to false
	I0904 06:28:19.838422 1608181 mustload.go:65] Loading cluster: ha-188231
	I0904 06:28:19.838460 1608181 notify.go:220] Checking for updates...
	I0904 06:28:19.838780 1608181 config.go:182] Loaded profile config "ha-188231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:28:19.838803 1608181 status.go:174] checking status of ha-188231 ...
	I0904 06:28:19.839204 1608181 cli_runner.go:164] Run: docker container inspect ha-188231 --format={{.State.Status}}
	I0904 06:28:19.858773 1608181 status.go:371] ha-188231 host status = "Stopped" (err=<nil>)
	I0904 06:28:19.858818 1608181 status.go:384] host is not running, skipping remaining checks
	I0904 06:28:19.858826 1608181 status.go:176] ha-188231 status: &{Name:ha-188231 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:28:19.858866 1608181 status.go:174] checking status of ha-188231-m02 ...
	I0904 06:28:19.859155 1608181 cli_runner.go:164] Run: docker container inspect ha-188231-m02 --format={{.State.Status}}
	I0904 06:28:19.876938 1608181 status.go:371] ha-188231-m02 host status = "Stopped" (err=<nil>)
	I0904 06:28:19.876964 1608181 status.go:384] host is not running, skipping remaining checks
	I0904 06:28:19.876971 1608181 status.go:176] ha-188231-m02 status: &{Name:ha-188231-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:28:19.876997 1608181 status.go:174] checking status of ha-188231-m04 ...
	I0904 06:28:19.877292 1608181 cli_runner.go:164] Run: docker container inspect ha-188231-m04 --format={{.State.Status}}
	I0904 06:28:19.894593 1608181 status.go:371] ha-188231-m04 host status = "Stopped" (err=<nil>)
	I0904 06:28:19.894615 1608181 status.go:384] host is not running, skipping remaining checks
	I0904 06:28:19.894623 1608181 status.go:176] ha-188231-m04 status: &{Name:ha-188231-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.59s)
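The non-zero exit above (exit status 7 after "minikube stop") is how the test tells "everything stopped" apart from a partial failure. minikube's status help documents the exit code as a bitmask over host, kubelet and apiserver state, so 7 means all three report not running; that interpretation is taken from the help text, not from this log. A minimal sketch of reading the code from Go:

// status_exit_sketch.go (illustrative)
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-188231", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("exit code: 0 (everything running)")
	case errors.As(err, &exitErr):
		code := exitErr.ExitCode()
		fmt.Println("exit code:", code)
		if code == 7 {
			// Assumed bit layout: host | kubelet | apiserver all stopped.
			fmt.Println("all components report stopped, as expected after `minikube stop`")
		}
	default:
		fmt.Println("could not run minikube:", err)
	}
}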

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (73.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0904 06:29:31.715386 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m13.176562468s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (73.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 node add --control-plane --alsologtostderr -v 5
E0904 06:29:59.419997 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-188231 node add --control-plane --alsologtostderr -v 5: (1m17.530295595s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-188231 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.34s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
x
+
TestJSONOutput/start/Command (69.23s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-137173 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-137173 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m9.224789154s)
--- PASS: TestJSONOutput/start/Command (69.23s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-137173 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-137173 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.79s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-137173 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-137173 --output=json --user=testUser: (5.787278172s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-644210 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-644210 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (67.614502ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"16585d43-40a3-4f51-9fde-9ba91e29591e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-644210] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9302565-cb7b-4be2-9939-2cc52891afa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"f16e4980-ecbe-4bb6-89d2-b9a63d0196ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a864ce70-4bcf-4676-984e-be25519a82a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig"}}
	{"specversion":"1.0","id":"2912b67a-c3cf-412a-99e5-45e3d2635e46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube"}}
	{"specversion":"1.0","id":"c2f45335-a384-4993-b775-88b4efb6618e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"40acd55a-0843-45af-91d1-afb709e7c679","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e5a4d599-eae6-4ef2-ad0c-4ff7aafcf6b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-644210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-644210
--- PASS: TestErrorJSONOutput (0.21s)
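The --output=json invocations above (both the successful start earlier and this failing one) emit one CloudEvents-style JSON object per line on stdout, with the event type in "type" and the payload in "data". A small sketch of consuming that stream is below; it decodes only the fields visible in the log output, and anything beyond those fields would be an assumption. Piping the failing command's stdout into it would print the step banner and the DRV_UNSUPPORTED_OS error with exit code 56.

// json_events_sketch.go (illustrative)
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start -p demo --output=json | go run json_events_sketch.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore anything that is not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}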

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (33.24s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-234515 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-234515 --network=: (31.146814571s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-234515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-234515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-234515: (2.07312831s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.24s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (25.18s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-128221 --network=bridge
E0904 06:32:57.337090 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-128221 --network=bridge: (23.227150978s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-128221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-128221
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-128221: (1.929155142s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.18s)

                                                
                                    
x
+
TestKicExistingNetwork (26.43s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0904 06:33:21.241859 1520716 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0904 06:33:21.259094 1520716 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0904 06:33:21.259180 1520716 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0904 06:33:21.259199 1520716 cli_runner.go:164] Run: docker network inspect existing-network
W0904 06:33:21.276165 1520716 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0904 06:33:21.276207 1520716 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0904 06:33:21.276222 1520716 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0904 06:33:21.276380 1520716 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0904 06:33:21.293368 1520716 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a5bc02d2a27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:b0:fb:06:b8:46} reservation:<nil>}
I0904 06:33:21.293940 1520716 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00210e860}
I0904 06:33:21.293979 1520716 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0904 06:33:21.294035 1520716 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0904 06:33:21.344202 1520716 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-228648 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-228648 --network=existing-network: (24.32353962s)
helpers_test.go:175: Cleaning up "existing-network-228648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-228648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-228648: (1.965157162s)
I0904 06:33:47.650922 1520716 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.43s)
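The log above first probes for "existing-network", finds it missing, skips the already-taken 192.168.49.0/24 subnet, creates a bridge network on 192.168.58.0/24, and only then starts a profile with --network=existing-network. The sketch below condenses that sequence, leaving out the probe and the free-subnet search; the subnet, gateway and profile name are illustrative values copied from the log rather than required ones.

// existing_network_sketch.go (illustrative)
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output to the terminal.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := run("docker", "network", "create",
		"--driver=bridge", "--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"existing-network"); err != nil {
		fmt.Fprintln(os.Stderr, "network create failed (it may already exist):", err)
	}
	if err := run("minikube", "start", "-p", "existing-network-demo",
		"--network=existing-network", "--driver=docker", "--container-runtime=crio"); err != nil {
		fmt.Fprintln(os.Stderr, "minikube start failed:", err)
		os.Exit(1)
	}
}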

                                                
                                    
x
+
TestKicCustomSubnet (27.58s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-277822 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-277822 --subnet=192.168.60.0/24: (25.466314363s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-277822 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-277822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-277822
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-277822: (2.095577658s)
--- PASS: TestKicCustomSubnet (27.58s)

                                                
                                    
x
+
TestKicStaticIP (26.15s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-331820 --static-ip=192.168.200.200
E0904 06:34:31.716033 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-331820 --static-ip=192.168.200.200: (23.93496033s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-331820 ip
helpers_test.go:175: Cleaning up "static-ip-331820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-331820
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-331820: (2.081052012s)
--- PASS: TestKicStaticIP (26.15s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (54.84s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-920491 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-920491 --driver=docker  --container-runtime=crio: (22.945389092s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-951194 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-951194 --driver=docker  --container-runtime=crio: (26.682778077s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-920491
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-951194
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-951194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-951194
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-951194: (1.825631183s)
helpers_test.go:175: Cleaning up "first-920491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-920491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-920491: (2.209892282s)
--- PASS: TestMinikubeProfile (54.84s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-965198 --memory=3072 --mount-string /tmp/TestMountStartserial3757903553/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-965198 --memory=3072 --mount-string /tmp/TestMountStartserial3757903553/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.048794274s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.05s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-965198 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
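The two tests above start a no-Kubernetes profile with a host directory mounted into the node and then verify it with "minikube ssh -- ls /minikube-host". The sketch below mirrors the flags visible on the logged command line; the host path and profile name are placeholders, and whether the mount is usable still depends on the driver, as it does in the test.

// mount_start_sketch.go (illustrative)
package main

import (
	"os"
	"os/exec"
)

// run invokes the minikube binary with the given arguments, streaming output.
func run(args ...string) error {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := run("start", "-p", "mount-demo", "--memory=3072", "--no-kubernetes",
		"--driver=docker", "--container-runtime=crio",
		"--mount-string", "/tmp/mount-demo:/minikube-host",
		"--mount-uid", "0", "--mount-gid", "0",
		"--mount-port", "46464", "--mount-msize", "6543"); err != nil {
		os.Exit(1)
	}
	// Read the mount point back from inside the node, as VerifyMountFirst does.
	if err := run("-p", "mount-demo", "ssh", "--", "ls", "/minikube-host"); err != nil {
		os.Exit(1)
	}
}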

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-978696 --memory=3072 --mount-string /tmp/TestMountStartserial3757903553/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-978696 --memory=3072 --mount-string /tmp/TestMountStartserial3757903553/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.158951202s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.16s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-978696 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-965198 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-965198 --alsologtostderr -v=5: (1.59427958s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-978696 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-978696
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-978696: (1.181733232s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.16s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-978696
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-978696: (6.15870506s)
--- PASS: TestMountStart/serial/RestartStopped (7.16s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-978696 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (123.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-375763 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0904 06:37:57.337131 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-375763 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m2.615876841s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (123.06s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-375763 -- rollout status deployment/busybox: (3.265196449s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-brgtc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-pfbdg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-brgtc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-pfbdg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-brgtc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-pfbdg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.66s)
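The deployment check above lists the busybox pods with a jsonpath query and then runs nslookup inside each one against kubernetes.io, kubernetes.default and the fully qualified in-cluster name. A condensed sketch of the same idea follows; it calls kubectl directly (the test goes through "minikube kubectl -p <profile> --"), and the busybox- prefix is taken from the pod names in the log.

// pod_dns_sketch.go (illustrative)
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl get pods failed:", err)
		os.Exit(1)
	}
	for _, pod := range strings.Fields(string(out)) {
		if !strings.HasPrefix(pod, "busybox-") {
			continue
		}
		// Resolve the in-cluster service name from inside the pod.
		lookup := exec.Command("kubectl", "exec", pod, "--",
			"nslookup", "kubernetes.default.svc.cluster.local")
		lookup.Stdout, lookup.Stderr = os.Stdout, os.Stderr
		if err := lookup.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "DNS lookup from %s failed: %v\n", pod, err)
		}
	}
}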

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-brgtc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-brgtc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-pfbdg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-375763 -- exec busybox-7b57f96db7-pfbdg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (56.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-375763 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-375763 -v=5 --alsologtostderr: (55.742899065s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.33s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-375763 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (8.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp testdata/cp-test.txt multinode-375763:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp multinode-375763:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3995588877/001/cp-test_multinode-375763.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp multinode-375763:/home/docker/cp-test.txt multinode-375763-m02:/home/docker/cp-test_multinode-375763_multinode-375763-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m02 "sudo cat /home/docker/cp-test_multinode-375763_multinode-375763-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp multinode-375763:/home/docker/cp-test.txt multinode-375763-m03:/home/docker/cp-test_multinode-375763_multinode-375763-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m03 "sudo cat /home/docker/cp-test_multinode-375763_multinode-375763-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp testdata/cp-test.txt multinode-375763-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp multinode-375763-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3995588877/001/cp-test_multinode-375763-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp multinode-375763-m02:/home/docker/cp-test.txt multinode-375763:/home/docker/cp-test_multinode-375763-m02_multinode-375763.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763 "sudo cat /home/docker/cp-test_multinode-375763-m02_multinode-375763.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp multinode-375763-m02:/home/docker/cp-test.txt multinode-375763-m03:/home/docker/cp-test_multinode-375763-m02_multinode-375763-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m03 "sudo cat /home/docker/cp-test_multinode-375763-m02_multinode-375763-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp testdata/cp-test.txt multinode-375763-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp multinode-375763-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3995588877/001/cp-test_multinode-375763-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp multinode-375763-m03:/home/docker/cp-test.txt multinode-375763:/home/docker/cp-test_multinode-375763-m03_multinode-375763.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763 "sudo cat /home/docker/cp-test_multinode-375763-m03_multinode-375763.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 cp multinode-375763-m03:/home/docker/cp-test.txt multinode-375763-m02:/home/docker/cp-test_multinode-375763-m03_multinode-375763-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 ssh -n multinode-375763-m02 "sudo cat /home/docker/cp-test_multinode-375763-m03_multinode-375763-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.83s)
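The copy test above exercises one pattern repeatedly: "minikube cp" a file onto a node (or between nodes), then read it back over "minikube ssh -n <node>" to confirm the contents arrived. The sketch below shows a single round trip of that pattern; the profile and node names mirror the log, while the destination path is just the one the test happens to use.

// cp_roundtrip_sketch.go (illustrative)
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile := "multinode-375763"

	// Copy a local file onto the second node.
	if err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", profile+"-m02:/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "cp failed:", err)
		os.Exit(1)
	}

	// Read it back over ssh to verify the copy, as the helpers above do.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile+"-m02",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "read-back failed:", err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}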

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-375763 node stop m03: (1.176660833s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-375763 status: exit status 7 (459.179076ms)

                                                
                                                
-- stdout --
	multinode-375763
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-375763-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-375763-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-375763 status --alsologtostderr: exit status 7 (457.279175ms)

                                                
                                                
-- stdout --
	multinode-375763
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-375763-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-375763-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:39:21.180743 1673231 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:39:21.181029 1673231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:39:21.181043 1673231 out.go:374] Setting ErrFile to fd 2...
	I0904 06:39:21.181048 1673231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:39:21.181223 1673231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:39:21.181394 1673231 out.go:368] Setting JSON to false
	I0904 06:39:21.181425 1673231 mustload.go:65] Loading cluster: multinode-375763
	I0904 06:39:21.181540 1673231 notify.go:220] Checking for updates...
	I0904 06:39:21.181845 1673231 config.go:182] Loaded profile config "multinode-375763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:39:21.181871 1673231 status.go:174] checking status of multinode-375763 ...
	I0904 06:39:21.182486 1673231 cli_runner.go:164] Run: docker container inspect multinode-375763 --format={{.State.Status}}
	I0904 06:39:21.201465 1673231 status.go:371] multinode-375763 host status = "Running" (err=<nil>)
	I0904 06:39:21.201501 1673231 host.go:66] Checking if "multinode-375763" exists ...
	I0904 06:39:21.201762 1673231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-375763
	I0904 06:39:21.218495 1673231 host.go:66] Checking if "multinode-375763" exists ...
	I0904 06:39:21.218766 1673231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:39:21.218813 1673231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-375763
	I0904 06:39:21.236167 1673231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34094 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/multinode-375763/id_rsa Username:docker}
	I0904 06:39:21.320903 1673231 ssh_runner.go:195] Run: systemctl --version
	I0904 06:39:21.324846 1673231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:39:21.335613 1673231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:39:21.382747 1673231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-09-04 06:39:21.374143844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:39:21.383266 1673231 kubeconfig.go:125] found "multinode-375763" server: "https://192.168.67.2:8443"
	I0904 06:39:21.383294 1673231 api_server.go:166] Checking apiserver status ...
	I0904 06:39:21.383338 1673231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:39:21.393497 1673231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1491/cgroup
	I0904 06:39:21.402074 1673231 api_server.go:182] apiserver freezer: "12:freezer:/docker/15b8efd3085aae4f47e3a08ef5b700d7a424b8b36713214eae1e43162979f54e/crio/crio-91e319195d530f53c5f40f6d6e7dc3ebb16d7f179e94d8c3d7c1bf7aa67a7d61"
	I0904 06:39:21.402130 1673231 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/15b8efd3085aae4f47e3a08ef5b700d7a424b8b36713214eae1e43162979f54e/crio/crio-91e319195d530f53c5f40f6d6e7dc3ebb16d7f179e94d8c3d7c1bf7aa67a7d61/freezer.state
	I0904 06:39:21.410331 1673231 api_server.go:204] freezer state: "THAWED"
	I0904 06:39:21.410359 1673231 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0904 06:39:21.415136 1673231 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0904 06:39:21.415160 1673231 status.go:463] multinode-375763 apiserver status = Running (err=<nil>)
	I0904 06:39:21.415174 1673231 status.go:176] multinode-375763 status: &{Name:multinode-375763 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:39:21.415194 1673231 status.go:174] checking status of multinode-375763-m02 ...
	I0904 06:39:21.415441 1673231 cli_runner.go:164] Run: docker container inspect multinode-375763-m02 --format={{.State.Status}}
	I0904 06:39:21.433762 1673231 status.go:371] multinode-375763-m02 host status = "Running" (err=<nil>)
	I0904 06:39:21.433787 1673231 host.go:66] Checking if "multinode-375763-m02" exists ...
	I0904 06:39:21.434069 1673231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-375763-m02
	I0904 06:39:21.452131 1673231 host.go:66] Checking if "multinode-375763-m02" exists ...
	I0904 06:39:21.452462 1673231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:39:21.452512 1673231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-375763-m02
	I0904 06:39:21.469195 1673231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34099 SSHKeyPath:/home/jenkins/minikube-integration/21409-1516970/.minikube/machines/multinode-375763-m02/id_rsa Username:docker}
	I0904 06:39:21.556956 1673231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:39:21.567499 1673231 status.go:176] multinode-375763-m02 status: &{Name:multinode-375763-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:39:21.567532 1673231 status.go:174] checking status of multinode-375763-m03 ...
	I0904 06:39:21.567777 1673231 cli_runner.go:164] Run: docker container inspect multinode-375763-m03 --format={{.State.Status}}
	I0904 06:39:21.585005 1673231 status.go:371] multinode-375763-m03 host status = "Stopped" (err=<nil>)
	I0904 06:39:21.585029 1673231 status.go:384] host is not running, skipping remaining checks
	I0904 06:39:21.585038 1673231 status.go:176] multinode-375763-m03 status: &{Name:multinode-375763-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-375763 node start m03 -v=5 --alsologtostderr: (6.661521435s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.31s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (72.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-375763
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-375763
E0904 06:39:31.715955 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-375763: (24.673018507s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-375763 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-375763 --wait=true -v=5 --alsologtostderr: (48.047867993s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-375763
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.82s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-375763 node delete m03: (4.62873877s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 stop
E0904 06:40:54.783747 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:41:00.412327 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-375763 stop: (23.605026701s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-375763 status: exit status 7 (84.649872ms)

                                                
                                                
-- stdout --
	multinode-375763
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-375763-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-375763 status --alsologtostderr: exit status 7 (84.014162ms)

                                                
                                                
-- stdout --
	multinode-375763
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-375763-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:41:10.642751 1682854 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:41:10.643011 1682854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:41:10.643021 1682854 out.go:374] Setting ErrFile to fd 2...
	I0904 06:41:10.643025 1682854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:41:10.643671 1682854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:41:10.643968 1682854 out.go:368] Setting JSON to false
	I0904 06:41:10.644013 1682854 mustload.go:65] Loading cluster: multinode-375763
	I0904 06:41:10.644059 1682854 notify.go:220] Checking for updates...
	I0904 06:41:10.644917 1682854 config.go:182] Loaded profile config "multinode-375763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:41:10.644949 1682854 status.go:174] checking status of multinode-375763 ...
	I0904 06:41:10.645483 1682854 cli_runner.go:164] Run: docker container inspect multinode-375763 --format={{.State.Status}}
	I0904 06:41:10.663535 1682854 status.go:371] multinode-375763 host status = "Stopped" (err=<nil>)
	I0904 06:41:10.663568 1682854 status.go:384] host is not running, skipping remaining checks
	I0904 06:41:10.663575 1682854 status.go:176] multinode-375763 status: &{Name:multinode-375763 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:41:10.663598 1682854 status.go:174] checking status of multinode-375763-m02 ...
	I0904 06:41:10.663898 1682854 cli_runner.go:164] Run: docker container inspect multinode-375763-m02 --format={{.State.Status}}
	I0904 06:41:10.680417 1682854 status.go:371] multinode-375763-m02 host status = "Stopped" (err=<nil>)
	I0904 06:41:10.680439 1682854 status.go:384] host is not running, skipping remaining checks
	I0904 06:41:10.680445 1682854 status.go:176] multinode-375763-m02 status: &{Name:multinode-375763-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.77s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (50.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-375763 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-375763 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.151162659s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-375763 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.71s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (23.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-375763
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-375763-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-375763-m02 --driver=docker  --container-runtime=crio: exit status 14 (71.494849ms)

                                                
                                                
-- stdout --
	* [multinode-375763-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-375763-m02' is duplicated with machine name 'multinode-375763-m02' in profile 'multinode-375763'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-375763-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-375763-m03 --driver=docker  --container-runtime=crio: (21.282123479s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-375763
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-375763: exit status 80 (277.605454ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-375763 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-375763-m03 already exists in multinode-375763-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-375763-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-375763-m03: (1.833263717s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.52s)

                                                
                                    
x
+
TestPreload (114.69s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-599741 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0904 06:42:57.338057 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-599741 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (52.788932611s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-599741 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-599741 image pull gcr.io/k8s-minikube/busybox: (2.36027553s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-599741
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-599741: (5.949149119s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-599741 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-599741 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.098981623s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-599741 image list
helpers_test.go:175: Cleaning up "test-preload-599741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-599741
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-599741: (2.281917673s)
--- PASS: TestPreload (114.69s)

                                                
                                    
x
+
TestScheduledStopUnix (97.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-339604 --memory=3072 --driver=docker  --container-runtime=crio
E0904 06:44:31.715987 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-339604 --memory=3072 --driver=docker  --container-runtime=crio: (22.099460347s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-339604 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-339604 -n scheduled-stop-339604
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-339604 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0904 06:44:45.995472 1520716 retry.go:31] will retry after 91.721µs: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:45.996654 1520716 retry.go:31] will retry after 194.84µs: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:45.997833 1520716 retry.go:31] will retry after 150.951µs: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:45.999006 1520716 retry.go:31] will retry after 403.107µs: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.000163 1520716 retry.go:31] will retry after 465.62µs: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.001336 1520716 retry.go:31] will retry after 893.98µs: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.002493 1520716 retry.go:31] will retry after 611.39µs: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.003671 1520716 retry.go:31] will retry after 2.10674ms: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.006881 1520716 retry.go:31] will retry after 2.36491ms: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.010125 1520716 retry.go:31] will retry after 2.935805ms: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.013377 1520716 retry.go:31] will retry after 5.942347ms: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.019619 1520716 retry.go:31] will retry after 8.011679ms: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.028104 1520716 retry.go:31] will retry after 14.831972ms: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.043347 1520716 retry.go:31] will retry after 27.908519ms: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
I0904 06:44:46.071613 1520716 retry.go:31] will retry after 33.598752ms: open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/scheduled-stop-339604/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-339604 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-339604 -n scheduled-stop-339604
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-339604
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-339604 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-339604
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-339604: exit status 7 (70.913263ms)

                                                
                                                
-- stdout --
	scheduled-stop-339604
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-339604 -n scheduled-stop-339604
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-339604 -n scheduled-stop-339604: exit status 7 (67.919038ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-339604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-339604
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-339604: (4.32649069s)
--- PASS: TestScheduledStopUnix (97.76s)

                                                
                                    
x
+
TestInsufficientStorage (12.25s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-203101 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-203101 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.956014937s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"838be4b0-7713-479a-a9e7-dcf7a58a5056","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-203101] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b0496e6-b508-4e3f-b539-fc639f4886ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"ca77bff8-d77c-4d1c-9d10-92c9786e6f67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2ec3cea9-fe45-48bc-9cd1-d6029c4f1a11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig"}}
	{"specversion":"1.0","id":"d1e66c68-5d9e-4f50-aaa7-a89490704209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube"}}
	{"specversion":"1.0","id":"5245cac4-a3d1-4805-bc90-b8176e394ae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ca08183b-2830-45e2-9af7-3fef1396f51b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fff13eab-adef-483f-8ebc-a4664d8ca7a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4904d872-45d0-42f8-8ee3-de0fa2fb2769","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"84f1b9b9-c766-4003-af2b-e492b308139a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"db3ccd80-d7c3-4eba-ad10-4238b60f3de2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"24a32ec7-65ca-40f9-8ab7-dbd05857dc31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-203101\" primary control-plane node in \"insufficient-storage-203101\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbd94c9c-04cd-4d9f-af06-08adc8ea0a0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756936034-21409 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff023dc2-cf6b-4b63-a268-20abb9541420","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"61e826e5-9c7b-4bb6-8f1c-561652a63d99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-203101 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-203101 --output=json --layout=cluster: exit status 7 (256.356561ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-203101","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-203101","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0904 06:46:11.453926 1704716 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-203101" does not appear in /home/jenkins/minikube-integration/21409-1516970/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-203101 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-203101 --output=json --layout=cluster: exit status 7 (254.270906ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-203101","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-203101","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0904 06:46:11.708776 1704814 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-203101" does not appear in /home/jenkins/minikube-integration/21409-1516970/kubeconfig
	E0904 06:46:11.718667 1704814 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/insufficient-storage-203101/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-203101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-203101
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-203101: (1.786316986s)
--- PASS: TestInsufficientStorage (12.25s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (45.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.689054210 start -p running-upgrade-151990 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.689054210 start -p running-upgrade-151990 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.513369923s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-151990 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-151990 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (13.935344147s)
helpers_test.go:175: Cleaning up "running-upgrade-151990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-151990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-151990: (1.907141205s)
--- PASS: TestRunningBinaryUpgrade (45.85s)

                                                
                                    
x
+
TestKubernetesUpgrade (321.21s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.634542934s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-892549
I0904 06:47:51.384661 1520716 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1058075281/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000013e50 gz:0xc000013e58 tar:0xc000013de0 tar.bz2:0xc000013df0 tar.gz:0xc000013e10 tar.xz:0xc000013e20 tar.zst:0xc000013e40 tbz2:0xc000013df0 tgz:0xc000013e10 txz:0xc000013e20 tzst:0xc000013e40 xz:0xc000013e90 zip:0xc000013ec0 zst:0xc000013e98] Getters:map[file:0xc0015c09b0 http:0xc000888410 https:0xc000888460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0904 06:47:51.384708 1520716 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1058075281/001/docker-machine-driver-kvm2
I0904 06:47:51.911405 1520716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 06:47:51.911504 1520716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0904 06:47:51.944146 1520716 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0904 06:47:51.944185 1520716 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0904 06:47:51.944364 1520716 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 06:47:51.944409 1520716 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1058075281/002/docker-machine-driver-kvm2
I0904 06:47:51.969468 1520716 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1058075281/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000013e50 gz:0xc000013e58 tar:0xc000013de0 tar.bz2:0xc000013df0 tar.gz:0xc000013e10 tar.xz:0xc000013e20 tar.zst:0xc000013e40 tbz2:0xc000013df0 tgz:0xc000013e10 txz:0xc000013e20 tzst:0xc000013e40 xz:0xc000013e90 zip:0xc000013ec0 zst:0xc000013e98] Getters:map[file:0xc000cda210 http:0xc00087e280 https:0xc00087e2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0904 06:47:51.969511 1520716 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1058075281/002/docker-machine-driver-kvm2
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-892549: (1.246555039s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-892549 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-892549 status --format={{.Host}}: exit status 7 (103.641106ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.127507428s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-892549 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (91.836325ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-892549] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-892549
	    minikube start -p kubernetes-upgrade-892549 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8925492 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-892549 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-892549 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.825975815s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-892549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-892549
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-892549: (2.095390412s)
--- PASS: TestKubernetesUpgrade (321.21s)

                                                
                                    
x
+
TestMissingContainerUpgrade (97.69s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4101194739 start -p missing-upgrade-741889 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4101194739 start -p missing-upgrade-741889 --memory=3072 --driver=docker  --container-runtime=crio: (49.011949796s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-741889
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-741889
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-741889 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-741889 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.973926551s)
helpers_test.go:175: Cleaning up "missing-upgrade-741889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-741889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-741889: (2.286543417s)
--- PASS: TestMissingContainerUpgrade (97.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-504557 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-504557 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (74.709323ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-504557] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (48.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-504557 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-504557 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (48.119898359s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-504557 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.62s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (66.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.798755085 start -p stopped-upgrade-679309 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.798755085 start -p stopped-upgrade-679309 --memory=3072 --vm-driver=docker  --container-runtime=crio: (50.302387283s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.798755085 -p stopped-upgrade-679309 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.798755085 -p stopped-upgrade-679309 stop: (1.206257284s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-679309 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-679309 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.300788664s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (66.81s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (6.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-504557 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-504557 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.064066448s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-504557 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-504557 status -o json: exit status 2 (285.810953ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-504557","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-504557
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-504557: (1.851099201s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.20s)
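
Restarting an existing profile with --no-kubernetes keeps the node container running but stops the kubelet and API server, which is why the status command exits 2 above. A minimal sketch of the same check (the exit codes in comments are the expected outcomes, not enforced by the commands themselves):

	$ minikube start -p NoKubernetes-504557 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio
	$ minikube -p NoKubernetes-504557 status -o json     # exits 2: Host "Running", Kubelet/APIServer "Stopped"
	$ minikube delete -p NoKubernetes-504557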

                                                
                                    
TestNoKubernetes/serial/Start (4.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-504557 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-504557 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.64897122s)
--- PASS: TestNoKubernetes/serial/Start (4.65s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-504557 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-504557 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.638878ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
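
The verification is simply a kubelet liveness probe over minikube ssh; systemctl's is-active exits non-zero when the unit is not active, so the non-zero exit above is the passing outcome. Roughly:

	$ minikube ssh -p NoKubernetes-504557 "sudo systemctl is-active --quiet service kubelet"
	$ echo $?     # non-zero while kubelet is not running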

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (5.525362487s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.20s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-504557
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-504557: (1.203140861s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-504557 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-504557 --driver=docker  --container-runtime=crio: (7.729518399s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.73s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-679309
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-504557 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-504557 "sudo systemctl is-active --quiet service kubelet": exit status 1 (307.999239ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestNetworkPlugins/group/false (5.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-444288 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-444288 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (175.028166ms)

                                                
                                                
-- stdout --
	* [false-444288] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:47:54.368378 1735678 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:47:54.368570 1735678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:47:54.368582 1735678 out.go:374] Setting ErrFile to fd 2...
	I0904 06:47:54.368588 1735678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:47:54.368783 1735678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1516970/.minikube/bin
	I0904 06:47:54.369407 1735678 out.go:368] Setting JSON to false
	I0904 06:47:54.370562 1735678 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":16224,"bootTime":1756952250,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:47:54.370628 1735678 start.go:140] virtualization: kvm guest
	I0904 06:47:54.372636 1735678 out.go:179] * [false-444288] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:47:54.374103 1735678 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:47:54.374143 1735678 notify.go:220] Checking for updates...
	I0904 06:47:54.376498 1735678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:47:54.377644 1735678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1516970/kubeconfig
	I0904 06:47:54.378538 1735678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1516970/.minikube
	I0904 06:47:54.379658 1735678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:47:54.380707 1735678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:47:54.382373 1735678 config.go:182] Loaded profile config "force-systemd-env-254309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:47:54.382481 1735678 config.go:182] Loaded profile config "kubernetes-upgrade-892549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:47:54.382580 1735678 config.go:182] Loaded profile config "running-upgrade-151990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0904 06:47:54.382700 1735678 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:47:54.414204 1735678 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 06:47:54.414300 1735678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 06:47:54.469032 1735678 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:82 SystemTime:2025-09-04 06:47:54.457239017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1083-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0904 06:47:54.469226 1735678 docker.go:318] overlay module found
	I0904 06:47:54.471147 1735678 out.go:179] * Using the docker driver based on user configuration
	I0904 06:47:54.472460 1735678 start.go:304] selected driver: docker
	I0904 06:47:54.472477 1735678 start.go:918] validating driver "docker" against <nil>
	I0904 06:47:54.472494 1735678 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:47:54.474700 1735678 out.go:203] 
	W0904 06:47:54.476117 1735678 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0904 06:47:54.477283 1735678 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-444288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-444288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 06:47:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-env-254309
contexts:
- context:
    cluster: force-systemd-env-254309
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 06:47:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: force-systemd-env-254309
  name: force-systemd-env-254309
current-context: force-systemd-env-254309
kind: Config
preferences: {}
users:
- name: force-systemd-env-254309
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/force-systemd-env-254309/client.crt
    client-key: /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/force-systemd-env-254309/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-444288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444288"

                                                
                                                
----------------------- debugLogs end: false-444288 [took: 5.064443067s] --------------------------------
helpers_test.go:175: Cleaning up "false-444288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-444288
--- PASS: TestNetworkPlugins/group/false (5.87s)
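
This group only confirms that --cni=false is rejected up front with the crio runtime (MK_USAGE, exit 14), since cri-o requires a CNI plugin; no cluster is ever created, which is why every debug probe above reports a missing profile or context. A sketch of the failing invocation and a working alternative (the bridge value is one of minikube's selectable CNIs, used here purely as an example):

	$ minikube start -p false-444288 --memory=3072 --cni=false --driver=docker --container-runtime=crio     # exit 14: the "crio" runtime requires CNI
	$ minikube start -p false-444288 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio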

                                                
                                    
TestPause/serial/Start (74.87s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-543860 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-543860 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m14.864911492s)
--- PASS: TestPause/serial/Start (74.87s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (24.52s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-543860 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-543860 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.502034026s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (50.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0904 06:49:31.715971 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/functional-856205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.386104398s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.39s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-543860 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-543860 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-543860 --output=json --layout=cluster: exit status 2 (331.514677ms)

                                                
                                                
-- stdout --
	{"Name":"pause-543860","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-543860","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
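
After pause, the cluster-layout status reports StatusCode 418 ("Paused") for the node and API server, and the status command itself exits 2, which is what the non-zero exit above captures. A short sketch of the same round trip:

	$ minikube pause -p pause-543860
	$ minikube status -p pause-543860 --output=json --layout=cluster     # exits 2; StatusName "Paused"
	$ minikube unpause -p pause-543860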

                                                
                                    
TestPause/serial/Unpause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-543860 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
TestPause/serial/PauseAgain (0.74s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-543860 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.74s)

                                                
                                    
TestPause/serial/DeletePaused (2.64s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-543860 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-543860 --alsologtostderr -v=5: (2.644166531s)
--- PASS: TestPause/serial/DeletePaused (2.64s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (4.77s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.719513607s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-543860
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-543860: exit status 1 (16.629908ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-543860: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (4.77s)
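
Cleanup is verified by confirming that the profile's Docker artifacts are gone after delete; the failing docker volume inspect is the expected result. Roughly:

	$ minikube delete -p pause-543860
	$ docker volume inspect pause-543860     # exit 1: no such volume
	$ docker ps -a
	$ docker network ls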

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (51.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (51.386381624s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-869290 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9dde5c2f-e9a5-4ffa-b81e-15feae35b318] Pending
helpers_test.go:352: "busybox" [9dde5c2f-e9a5-4ffa-b81e-15feae35b318] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9dde5c2f-e9a5-4ffa-b81e-15feae35b318] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00329203s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-869290 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.25s)
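
The deploy step is plain kubectl against the test context: create the busybox pod, wait for it to become Ready, then run a command in it. A sketch using kubectl wait as a stand-in for the harness's own polling (the label matches the integration-test=busybox selector the test waits on):

	$ kubectl --context old-k8s-version-869290 create -f testdata/busybox.yaml
	$ kubectl --context old-k8s-version-869290 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	$ kubectl --context old-k8s-version-869290 exec busybox -- /bin/sh -c "ulimit -n"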

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-869290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-869290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.187682763s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-869290 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)
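
The metrics-server addon is enabled with its image and registry redirected to a placeholder (fake.domain), then the resulting Deployment is inspected; the same two commands should work against any running profile:

	$ minikube addons enable metrics-server -p old-k8s-version-869290 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	$ kubectl --context old-k8s-version-869290 describe deploy/metrics-server -n kube-system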

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-869290 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-869290 --alsologtostderr -v=3: (11.915665059s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-574576 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7c7ab0ff-b169-411f-8ac4-0ef53d1660a8] Pending
helpers_test.go:352: "busybox" [7c7ab0ff-b169-411f-8ac4-0ef53d1660a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7c7ab0ff-b169-411f-8ac4-0ef53d1660a8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004194924s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-574576 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-869290 -n old-k8s-version-869290
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-869290 -n old-k8s-version-869290: exit status 7 (69.054529ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-869290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
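
With the cluster stopped, status --format={{.Host}} exits 7 and prints "Stopped" (treated as acceptable by the test), and addons can still be toggled so they take effect on the next start. Roughly:

	$ minikube status --format={{.Host}} -p old-k8s-version-869290 -n old-k8s-version-869290     # exit 7, prints "Stopped"
	$ minikube addons enable dashboard -p old-k8s-version-869290 --images=MetricsScraper=registry.k8s.io/echoserver:1.4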

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (52.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-869290 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.826662784s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-869290 -n old-k8s-version-869290
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-574576 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-574576 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-574576 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-574576 --alsologtostderr -v=3: (11.925640273s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-574576 -n no-preload-574576
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-574576 -n no-preload-574576: exit status 7 (72.610825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-574576 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (48.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-574576 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (48.123532754s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-574576 -n no-preload-574576
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (75.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m15.282221451s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 06:52:57.336542 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (42.120754189s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-589812 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [87b04f8e-1c66-414b-b0a1-06733f1911cb] Pending
helpers_test.go:352: "busybox" [87b04f8e-1c66-414b-b0a1-06733f1911cb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [87b04f8e-1c66-414b-b0a1-06733f1911cb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004273791s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-589812 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-589812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-589812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-589812 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-589812 --alsologtostderr -v=3: (11.820846725s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [40d7a149-eb04-46a9-a7b3-f13967cfc683] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [40d7a149-eb04-46a9-a7b3-f13967cfc683] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.002982488s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-520775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-520775 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-520775 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-520775 --alsologtostderr -v=3: (11.946673485s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-589812 -n embed-certs-589812
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-589812 -n embed-certs-589812: exit status 7 (78.984483ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-589812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
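The exit status 7 above is how the test detects a stopped host before enabling an addon offline. A hand-run sketch of the same check (not part of the harness output):
	$ out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-589812 -n embed-certs-589812   # prints "Stopped", exits 7
	$ out/minikube-linux-amd64 addons enable dashboard -p embed-certs-589812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4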

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (47.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-589812 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (47.500034298s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-589812 -n embed-certs-589812
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775: exit status 7 (81.29829ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-520775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-520775 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (48.564945364s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-869290 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-869290 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-869290 -n old-k8s-version-869290
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-869290 -n old-k8s-version-869290: exit status 2 (293.375332ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-869290 -n old-k8s-version-869290
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-869290 -n old-k8s-version-869290: exit status 2 (302.579959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-869290 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-869290 -n old-k8s-version-869290
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-869290 -n old-k8s-version-869290
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.71s)
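Each Pause subtest runs the same cycle: pause, confirm the apiserver reports Paused and the kubelet reports Stopped (status exits 2 in both cases), then unpause and re-check. Sketched by hand against this profile (not part of the harness output):
	$ out/minikube-linux-amd64 pause -p old-k8s-version-869290 --alsologtostderr -v=1
	$ out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-869290 -n old-k8s-version-869290   # "Paused", exit 2
	$ out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-869290 -n old-k8s-version-869290     # "Stopped", exit 2
	$ out/minikube-linux-amd64 unpause -p old-k8s-version-869290 --alsologtostderr -v=1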

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (30.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-179620 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-179620 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (30.908565908s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-574576 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-574576 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-574576 -n no-preload-574576
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-574576 -n no-preload-574576: exit status 2 (304.830501ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-574576 -n no-preload-574576
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-574576 -n no-preload-574576: exit status 2 (300.5757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-574576 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-574576 -n no-preload-574576
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-574576 -n no-preload-574576
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.76s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (73.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m13.773027921s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.77s)
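Each TestNetworkPlugins Start subtest below boots its own profile, and only the CNI selection changes; the remaining flags match the auto run above. Illustrative invocations summarized from the runs that follow (not part of the harness output):
	$ out/minikube-linux-amd64 start -p kindnet-444288 --memory=3072 --wait=true --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=crio
	$ out/minikube-linux-amd64 start -p custom-flannel-444288 ... --cni=testdata/kube-flannel.yaml ...
	$ out/minikube-linux-amd64 start -p enable-default-cni-444288 ... --enable-default-cni=true ...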

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-179620 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-179620 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-179620 --alsologtostderr -v=3: (1.199647551s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-179620 -n newest-cni-179620
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-179620 -n newest-cni-179620: exit status 7 (70.419067ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-179620 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (14.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-179620 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 07:10:20.041685 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:20.048133 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:20.059639 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:20.081309 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:20.123438 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:20.205314 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:20.366919 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:20.688739 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:21.330410 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:22.612005 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:25.173584 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:30.295873 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-179620 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (14.185199138s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-179620 -n newest-cni-179620
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-179620 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-179620 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-179620 -n newest-cni-179620
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-179620 -n newest-cni-179620: exit status 2 (298.669007ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-179620 -n newest-cni-179620
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-179620 -n newest-cni-179620: exit status 2 (292.603056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-179620 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-179620 -n newest-cni-179620
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-179620 -n newest-cni-179620
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.70s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (73.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0904 07:10:40.537484 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:41.349632 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:41.356040 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:41.367516 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:41.388954 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:41.430419 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:41.511987 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:41.674331 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:41.995726 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:42.637219 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:43.919527 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:46.481723 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:51.603236 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:11:01.019551 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:11:01.845571 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m13.090216976s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.09s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-444288 "pgrep -a kubelet"
I0904 07:11:16.803026 1520716 config.go:182] Loaded profile config "auto-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-444288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gxjkv" [e9fc9dbe-0960-433e-96b4-0102fda21c37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gxjkv" [e9fc9dbe-0960-433e-96b4-0102fda21c37] Running
E0904 07:11:22.327392 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003940274s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-444288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
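The DNS, Localhost, and HairPin subtests reuse the netcat deployment created in NetCatPod and exec a different probe inside it; by hand against the auto profile the three checks are (not part of the harness output):
	$ kubectl --context auto-444288 exec deployment/netcat -- nslookup kubernetes.default
	$ kubectl --context auto-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	$ kubectl --context auto-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"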

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-gmc54" [4589ad44-3cb3-47b4-b459-cac2d358434e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003663029s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-444288 "pgrep -a kubelet"
I0904 07:11:58.236396 1520716 config.go:182] Loaded profile config "kindnet-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-444288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dxncf" [3c722475-e259-4a98-b0f6-4d84dc3e7eaf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dxncf" [3c722475-e259-4a98-b0f6-4d84dc3e7eaf] Running
E0904 07:12:03.289235 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/no-preload-574576/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003705159s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-444288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (48.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (48.488053445s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-589812 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-589812 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-589812 -n embed-certs-589812
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-589812 -n embed-certs-589812: exit status 2 (290.914788ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-589812 -n embed-certs-589812
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-589812 -n embed-certs-589812: exit status 2 (311.050078ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-589812 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-589812 -n embed-certs-589812
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-589812 -n embed-certs-589812
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.67s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (69.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m9.068390907s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-520775 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-520775 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775: exit status 2 (308.786914ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775: exit status 2 (331.760989ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-520775 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-520775 -n default-k8s-diff-port-520775
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.00s)
E0904 07:14:20.416445 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0904 07:12:57.336735 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/addons-306757/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:13:03.904022 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/old-k8s-version-869290/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (55.384144717s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-444288 "pgrep -a kubelet"
I0904 07:13:15.652185 1520716 config.go:182] Loaded profile config "custom-flannel-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-444288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-frff7" [04521086-ccc3-4bba-a421-ecb29888e8ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-frff7" [04521086-ccc3-4bba-a421-ecb29888e8ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004384007s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-444288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (69.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-444288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m9.193827707s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fpnd4" [f9337640-f1ff-401b-b49b-01f08742a440] Running
E0904 07:13:47.776774 1520716 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/default-k8s-diff-port-520775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004229516s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-444288 "pgrep -a kubelet"
I0904 07:13:48.211136 1520716 config.go:182] Loaded profile config "enable-default-cni-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-444288 replace --force -f testdata/netcat-deployment.yaml
I0904 07:13:48.723001 1520716 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0904 07:13:48.741054 1520716 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m8n76" [d33f9c20-fce5-4585-8a62-8cd4af274fcf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m8n76" [d33f9c20-fce5-4585-8a62-8cd4af274fcf] Running
I0904 07:13:53.151104 1520716 config.go:182] Loaded profile config "flannel-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003131393s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.75s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-444288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-444288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bpvkl" [ddbb0fe8-65b0-440a-8c25-b19c14ead8fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bpvkl" [ddbb0fe8-65b0-440a-8c25-b19c14ead8fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003763917s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-444288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-444288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-444288 "pgrep -a kubelet"
I0904 07:14:53.510618 1520716 config.go:182] Loaded profile config "bridge-444288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-444288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hzdsf" [a7f9162c-14bb-4eeb-a6af-7b8cd73c88b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hzdsf" [a7f9162c-14bb-4eeb-a6af-7b8cd73c88b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003995542s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-444288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
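The DNS, Localhost, and HairPin checks above exercise three connectivity paths from inside the netcat pod: service-name resolution via the cluster DNS, a loopback connection to the pod's own port, and a hairpin connection back to the pod through its own Service name. Assuming the bridge-444288 context is still available, the same probes can be reproduced by hand with the commands the tests ran:

	kubectl --context bridge-444288 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context bridge-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context bridge-444288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"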

                                                
                                    

Test skip (27/326)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.27s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-306757 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-444288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-444288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-1516970/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 06:47:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-892549
contexts:
- context:
    cluster: kubernetes-upgrade-892549
    extensions:
    - extension:
        last-update: Thu, 04 Sep 2025 06:47:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-892549
  name: kubernetes-upgrade-892549
current-context: kubernetes-upgrade-892549
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-892549
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/kubernetes-upgrade-892549/client.crt
    client-key: /home/jenkins/minikube-integration/21409-1516970/.minikube/profiles/kubernetes-upgrade-892549/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-444288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444288"

                                                
                                                
----------------------- debugLogs end: kubenet-444288 [took: 4.285348811s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-444288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-444288
--- SKIP: TestNetworkPlugins/group/kubenet (4.46s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-393542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-393542
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (7.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-444288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-444288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-444288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-444288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444288"

                                                
                                                
----------------------- debugLogs end: cilium-444288 [took: 6.667809184s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-444288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-444288
--- SKIP: TestNetworkPlugins/group/cilium (7.11s)

                                                
                                    