Test Report: Docker_Linux_crio 21642

                    
14b81faeac061460adc41f1c17794999a5c5cccd:2025-09-26:41636

Failed tests (11/326)

TestAddons/parallel/Ingress (156.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-341571 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-341571 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-341571 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9f6e2838-a48c-4925-8f6e-4b9e5d8d6b20] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9f6e2838-a48c-4925-8f6e-4b9e5d8d6b20] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003100254s
I0926 22:33:02.545771  212137 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-341571 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.993609339s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-341571 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
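
A note on the failure mode: the embedded ssh error shows the remote curl exiting with status 28, curl's code for a transfer timeout (CURLE_OPERATION_TIMEDOUT), so the 2m15s non-zero exit means the request to the ingress controller on 127.0.0.1:80 inside the node never completed, rather than the controller returning an error page. A minimal sketch for reproducing the probe by hand against the same profile; the --max-time flag and the follow-up kubectl checks are illustrative additions, not part of the test:

    # re-run the probe with an explicit timeout instead of letting it hang
    out/minikube-linux-amd64 -p addons-341571 ssh \
      "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # confirm the controller pod is serving and the Ingress object was admitted
    kubectl --context addons-341571 -n ingress-nginx get pods -o wide
    kubectl --context addons-341571 get ingress --all-namespaces
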
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-341571
helpers_test.go:243: (dbg) docker inspect addons-341571:

-- stdout --
	[
	    {
	        "Id": "d1ade4be5b671691a8188598363181f1a7da4606523250a71ca007e79f7639aa",
	        "Created": "2025-09-26T22:29:54.314756164Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 214076,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:29:54.350871526Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d1ade4be5b671691a8188598363181f1a7da4606523250a71ca007e79f7639aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d1ade4be5b671691a8188598363181f1a7da4606523250a71ca007e79f7639aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/d1ade4be5b671691a8188598363181f1a7da4606523250a71ca007e79f7639aa/hosts",
	        "LogPath": "/var/lib/docker/containers/d1ade4be5b671691a8188598363181f1a7da4606523250a71ca007e79f7639aa/d1ade4be5b671691a8188598363181f1a7da4606523250a71ca007e79f7639aa-json.log",
	        "Name": "/addons-341571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-341571:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-341571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d1ade4be5b671691a8188598363181f1a7da4606523250a71ca007e79f7639aa",
	                "LowerDir": "/var/lib/docker/overlay2/5fde8956246e0eee6464a4feab5a3bb88e5704957dcf321197257d3a5d04300c-init/diff:/var/lib/docker/overlay2/539bc53ffef0f27e9ac4c376a14359e91e4b4c4b56d5675ca6caeaaab94e33fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5fde8956246e0eee6464a4feab5a3bb88e5704957dcf321197257d3a5d04300c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5fde8956246e0eee6464a4feab5a3bb88e5704957dcf321197257d3a5d04300c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5fde8956246e0eee6464a4feab5a3bb88e5704957dcf321197257d3a5d04300c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-341571",
	                "Source": "/var/lib/docker/volumes/addons-341571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-341571",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-341571",
	                "name.minikube.sigs.k8s.io": "addons-341571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d2ae213625c6dd5ffc0b3766ea23b29ab267deb1045bb6f8a2514beee7bef48",
	            "SandboxKey": "/var/run/docker/netns/1d2ae213625c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-341571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:d2:2c:54:65:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "34edea4374e55c9a34f0cb14d8f1a84f068407a71b766ac08d09f8f2ff5ec081",
	                    "EndpointID": "0b3f51ea5133f8f210355a9e0bc66fb1d059ace5bdd02aa4d0bd4c230cce4e60",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-341571",
	                        "d1ade4be5b67"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
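
Two details in the inspect output are worth a gloss: HostConfig.PortBindings requests 127.0.0.1 bindings with an empty HostPort, which asks Docker to assign ephemeral host ports, and the concrete assignments (32768-32772) then appear under NetworkSettings.Ports. The harness resolves the SSH endpoint from that map with a Go template (the same template appears verbatim in the provisioning log further down); a sketch of the lookup by hand:

    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-341571
    # for this run this prints 32768, the port the SSH client targets on 127.0.0.1
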
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-341571 -n addons-341571
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-341571 logs -n 25: (1.280689374s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-540712 --alsologtostderr --binary-mirror http://127.0.0.1:37555 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-540712 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ -p binary-mirror-540712                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-540712 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ addons  │ disable dashboard -p addons-341571                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-341571                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ start   │ -p addons-341571 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-341571 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-341571 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ enable headlamp -p addons-341571 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-341571 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ ssh     │ addons-341571 ssh cat /opt/local-path-provisioner/pvc-49272039-2e34-4194-b5f2-2d7b41dce849_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-341571 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:33 UTC │
	│ addons  │ addons-341571 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ ip      │ addons-341571 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-341571 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-341571 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-341571 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-341571                                                                                                                                                                                                                                                                                                                                                                                           │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-341571 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:32 UTC │ 26 Sep 25 22:32 UTC │
	│ addons  │ addons-341571 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ ssh     │ addons-341571 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │                     │
	│ addons  │ addons-341571 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons  │ addons-341571 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons  │ addons-341571 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ addons  │ addons-341571 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:33 UTC │ 26 Sep 25 22:33 UTC │
	│ ip      │ addons-341571 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-341571        │ jenkins │ v1.37.0 │ 26 Sep 25 22:35 UTC │ 26 Sep 25 22:35 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:34.507051  213442 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:34.507186  213442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:34.507195  213442 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:34.507200  213442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:34.507402  213442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 22:29:34.507972  213442 out.go:368] Setting JSON to false
	I0926 22:29:34.508802  213442 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7923,"bootTime":1758917851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:34.508894  213442 start.go:140] virtualization: kvm guest
	I0926 22:29:34.510940  213442 out.go:179] * [addons-341571] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:34.512231  213442 notify.go:220] Checking for updates...
	I0926 22:29:34.512237  213442 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:29:34.513637  213442 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:34.515246  213442 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 22:29:34.516638  213442 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 22:29:34.517830  213442 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:29:34.518904  213442 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:29:34.520291  213442 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:34.542658  213442 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:29:34.542770  213442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:34.598526  213442 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-26 22:29:34.589429525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:34.598639  213442 docker.go:318] overlay module found
	I0926 22:29:34.600454  213442 out.go:179] * Using the docker driver based on user configuration
	I0926 22:29:34.601704  213442 start.go:304] selected driver: docker
	I0926 22:29:34.601719  213442 start.go:924] validating driver "docker" against <nil>
	I0926 22:29:34.601731  213442 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:29:34.602311  213442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:34.660960  213442 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-26 22:29:34.65047464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:34.661194  213442 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:34.661423  213442 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:29:34.663062  213442 out.go:179] * Using Docker driver with root privileges
	I0926 22:29:34.664176  213442 cni.go:84] Creating CNI manager for ""
	I0926 22:29:34.664270  213442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0926 22:29:34.664284  213442 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0926 22:29:34.664349  213442 start.go:348] cluster config:
	{Name:addons-341571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-341571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:34.665698  213442 out.go:179] * Starting "addons-341571" primary control-plane node in "addons-341571" cluster
	I0926 22:29:34.666668  213442 cache.go:123] Beginning downloading kic base image for docker with crio
	I0926 22:29:34.667695  213442 out.go:179] * Pulling base image v0.0.48 ...
	I0926 22:29:34.668714  213442 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:29:34.668755  213442 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 22:29:34.668764  213442 cache.go:58] Caching tarball of preloaded images
	I0926 22:29:34.668849  213442 preload.go:172] Found /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 22:29:34.668860  213442 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 22:29:34.668848  213442 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 22:29:34.669187  213442 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/config.json ...
	I0926 22:29:34.669212  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/config.json: {Name:mk3f43b8f341af3b23f7f83b3a205d38f25b358e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:34.685481  213442 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0926 22:29:34.685635  213442 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0926 22:29:34.685656  213442 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0926 22:29:34.685662  213442 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0926 22:29:34.685672  213442 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0926 22:29:34.685679  213442 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0926 22:29:46.968278  213442 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0926 22:29:46.968339  213442 cache.go:232] Successfully downloaded all kic artifacts
	I0926 22:29:46.968395  213442 start.go:360] acquireMachinesLock for addons-341571: {Name:mk79644fe0aed066c01fa86d730d1c14982ceafd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 22:29:46.968492  213442 start.go:364] duration metric: took 75.146µs to acquireMachinesLock for "addons-341571"
	I0926 22:29:46.968518  213442 start.go:93] Provisioning new machine with config: &{Name:addons-341571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-341571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 22:29:46.968596  213442 start.go:125] createHost starting for "" (driver="docker")
	I0926 22:29:46.971559  213442 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0926 22:29:46.971823  213442 start.go:159] libmachine.API.Create for "addons-341571" (driver="docker")
	I0926 22:29:46.971874  213442 client.go:168] LocalClient.Create starting
	I0926 22:29:46.971979  213442 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem
	I0926 22:29:47.015801  213442 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem
	I0926 22:29:47.165950  213442 cli_runner.go:164] Run: docker network inspect addons-341571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 22:29:47.182807  213442 cli_runner.go:211] docker network inspect addons-341571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 22:29:47.182883  213442 network_create.go:284] running [docker network inspect addons-341571] to gather additional debugging logs...
	I0926 22:29:47.182908  213442 cli_runner.go:164] Run: docker network inspect addons-341571
	W0926 22:29:47.200017  213442 cli_runner.go:211] docker network inspect addons-341571 returned with exit code 1
	I0926 22:29:47.200056  213442 network_create.go:287] error running [docker network inspect addons-341571]: docker network inspect addons-341571: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-341571 not found
	I0926 22:29:47.200098  213442 network_create.go:289] output of [docker network inspect addons-341571]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-341571 not found
	
	** /stderr **
	I0926 22:29:47.200214  213442 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:47.219049  213442 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7d490}
	I0926 22:29:47.219117  213442 network_create.go:124] attempt to create docker network addons-341571 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0926 22:29:47.219179  213442 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-341571 addons-341571
	I0926 22:29:47.340786  213442 network_create.go:108] docker network addons-341571 192.168.49.0/24 created
	I0926 22:29:47.340821  213442 kic.go:121] calculated static IP "192.168.49.2" for the "addons-341571" container
	I0926 22:29:47.340927  213442 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 22:29:47.358455  213442 cli_runner.go:164] Run: docker volume create addons-341571 --label name.minikube.sigs.k8s.io=addons-341571 --label created_by.minikube.sigs.k8s.io=true
	I0926 22:29:47.459844  213442 oci.go:103] Successfully created a docker volume addons-341571
	I0926 22:29:47.459937  213442 cli_runner.go:164] Run: docker run --rm --name addons-341571-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-341571 --entrypoint /usr/bin/test -v addons-341571:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 22:29:50.097875  213442 cli_runner.go:217] Completed: docker run --rm --name addons-341571-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-341571 --entrypoint /usr/bin/test -v addons-341571:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.637883726s)
	I0926 22:29:50.097903  213442 oci.go:107] Successfully prepared a docker volume addons-341571
	I0926 22:29:50.097936  213442 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:29:50.097960  213442 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 22:29:50.098012  213442 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-341571:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 22:29:54.242907  213442 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-341571:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.144834172s)
	I0926 22:29:54.242967  213442 kic.go:203] duration metric: took 4.144992325s to extract preloaded images to volume ...
	W0926 22:29:54.243142  213442 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 22:29:54.243187  213442 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 22:29:54.243229  213442 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 22:29:54.299467  213442 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-341571 --name addons-341571 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-341571 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-341571 --network addons-341571 --ip 192.168.49.2 --volume addons-341571:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
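
An aside on the docker run invocation above: each --publish=127.0.0.1::PORT leaves the host-port field empty so Docker assigns an ephemeral loopback port, which is exactly why the earlier docker inspect shows empty HostPort values under HostConfig.PortBindings but concrete ports under NetworkSettings.Ports. A standalone illustration of the same pattern (the demo container name and nginx image are placeholders, not from this run):

    docker run -d --name demo --publish 127.0.0.1::80 nginx
    docker port demo 80    # prints the assigned loopback endpoint, e.g. 127.0.0.1:32768
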
	I0926 22:29:54.600196  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Running}}
	I0926 22:29:54.618759  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:29:54.635729  213442 cli_runner.go:164] Run: docker exec addons-341571 stat /var/lib/dpkg/alternatives/iptables
	I0926 22:29:54.682892  213442 oci.go:144] the created container "addons-341571" has a running status.
	I0926 22:29:54.682929  213442 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa...
	I0926 22:29:55.090849  213442 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 22:29:55.117386  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:29:55.135311  213442 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 22:29:55.135334  213442 kic_runner.go:114] Args: [docker exec --privileged addons-341571 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0926 22:29:55.181002  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:29:55.198360  213442 machine.go:93] provisionDockerMachine start ...
	I0926 22:29:55.198459  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:29:55.215416  213442 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:55.215645  213442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0926 22:29:55.215658  213442 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 22:29:55.352267  213442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-341571
	
	I0926 22:29:55.352298  213442 ubuntu.go:182] provisioning hostname "addons-341571"
	I0926 22:29:55.352353  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:29:55.369984  213442 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:55.370230  213442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0926 22:29:55.370247  213442 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-341571 && echo "addons-341571" | sudo tee /etc/hostname
	I0926 22:29:55.518367  213442 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-341571
	
	I0926 22:29:55.518444  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:29:55.536131  213442 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:55.536342  213442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0926 22:29:55.536358  213442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-341571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-341571/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-341571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 22:29:55.671881  213442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
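	Each "About to run SSH command" step dials the container's published 127.0.0.1:32768 port with the freshly generated id_rsa. A sketch of that pattern using golang.org/x/crypto/ssh (an external module; the key path and helper name are illustrative, and host-key checking is skipped as it is for throwaway local machines):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH executes one command over SSH, the way the provisioner's
	// "About to run SSH command" steps do, returning combined output.
	func runSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runSSH("127.0.0.1:32768", "docker",
			"/home/jenkins/.minikube/machines/addons-341571/id_rsa", // illustrative path
			`sudo hostname addons-341571 && echo "addons-341571" | sudo tee /etc/hostname`)
		fmt.Println(out, err)
	}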
	I0926 22:29:55.671910  213442 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-208519/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-208519/.minikube}
	I0926 22:29:55.671952  213442 ubuntu.go:190] setting up certificates
	I0926 22:29:55.671964  213442 provision.go:84] configureAuth start
	I0926 22:29:55.672018  213442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-341571
	I0926 22:29:55.689113  213442 provision.go:143] copyHostCerts
	I0926 22:29:55.689184  213442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem (1078 bytes)
	I0926 22:29:55.689314  213442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem (1123 bytes)
	I0926 22:29:55.689407  213442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem (1675 bytes)
	I0926 22:29:55.689483  213442 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem org=jenkins.addons-341571 san=[127.0.0.1 192.168.49.2 addons-341571 localhost minikube]
	I0926 22:29:55.795210  213442 provision.go:177] copyRemoteCerts
	I0926 22:29:55.795301  213442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 22:29:55.795365  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:29:55.812636  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:29:55.910041  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 22:29:55.937146  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 22:29:55.962196  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 22:29:55.986246  213442 provision.go:87] duration metric: took 314.265362ms to configureAuth
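	configureAuth generates a CA-signed server certificate whose SANs match the list logged above ([127.0.0.1 192.168.49.2 addons-341571 localhost minikube]). A self-contained crypto/x509 sketch of that shape; a fresh throwaway CA stands in for .minikube/certs/ca.pem, and error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA key/cert (stands in for the ca.pem in the log).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the same SAN set the log reports.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-341571"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"addons-341571", "localhost", "minikube"},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}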
	I0926 22:29:55.986279  213442 ubuntu.go:206] setting minikube options for container-runtime
	I0926 22:29:55.986470  213442 config.go:182] Loaded profile config "addons-341571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:29:55.986595  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:29:56.003898  213442 main.go:141] libmachine: Using SSH client type: native
	I0926 22:29:56.004130  213442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0926 22:29:56.004150  213442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 22:29:56.242275  213442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 22:29:56.242303  213442 machine.go:96] duration metric: took 1.043918789s to provisionDockerMachine
	I0926 22:29:56.242316  213442 client.go:171] duration metric: took 9.270433839s to LocalClient.Create
	I0926 22:29:56.242350  213442 start.go:167] duration metric: took 9.270527983s to libmachine.API.Create "addons-341571"
	I0926 22:29:56.242364  213442 start.go:293] postStartSetup for "addons-341571" (driver="docker")
	I0926 22:29:56.242379  213442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 22:29:56.242508  213442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 22:29:56.242567  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:29:56.259884  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:29:56.359069  213442 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 22:29:56.362643  213442 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 22:29:56.362676  213442 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 22:29:56.362684  213442 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 22:29:56.362691  213442 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 22:29:56.362703  213442 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-208519/.minikube/addons for local assets ...
	I0926 22:29:56.362763  213442 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-208519/.minikube/files for local assets ...
	I0926 22:29:56.362787  213442 start.go:296] duration metric: took 120.416147ms for postStartSetup
	I0926 22:29:56.363127  213442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-341571
	I0926 22:29:56.380524  213442 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/config.json ...
	I0926 22:29:56.380778  213442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:29:56.380823  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:29:56.397620  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:29:56.490288  213442 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
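	The two df probes read the usage and free-space columns for /var. A Go sketch of the same probe without the awk pipeline (helper name illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// varFreeGB runs the same probe as the log's `df -BG /var` and pulls
	// the "Available" column from the second line of output.
	func varFreeGB() (string, error) {
		out, err := exec.Command("df", "-BG", "/var").Output()
		if err != nil {
			return "", err
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) < 2 {
			return "", fmt.Errorf("unexpected df output: %q", out)
		}
		fields := strings.Fields(lines[1]) // Filesystem Size Used Avail Use% Mounted
		if len(fields) < 4 {
			return "", fmt.Errorf("unexpected df line: %q", lines[1])
		}
		return fields[3], nil // e.g. "280G"
	}

	func main() {
		free, err := varFreeGB()
		fmt.Println(free, err)
	}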
	I0926 22:29:56.494674  213442 start.go:128] duration metric: took 9.526061746s to createHost
	I0926 22:29:56.494701  213442 start.go:83] releasing machines lock for "addons-341571", held for 9.526196048s
	I0926 22:29:56.494784  213442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-341571
	I0926 22:29:56.511750  213442 ssh_runner.go:195] Run: cat /version.json
	I0926 22:29:56.511796  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:29:56.511837  213442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 22:29:56.511914  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:29:56.529989  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:29:56.530707  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:29:56.694824  213442 ssh_runner.go:195] Run: systemctl --version
	I0926 22:29:56.699636  213442 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 22:29:56.840959  213442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 22:29:56.846345  213442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:29:56.870372  213442 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0926 22:29:56.870461  213442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 22:29:56.901111  213442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
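	The find/-exec mv dance above sidelines any preinstalled bridge or podman CNI configs by appending .mk_disabled, leaving pod networking to kindnet. The same effect as a Go sketch; the match rule is simplified to substring checks:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableCNIConfigs renames bridge/podman configs in /etc/cni/net.d
	// with a ".mk_disabled" suffix so the runtime ignores them.
	func disableCNIConfigs(dir string) ([]string, error) {
		var disabled []string
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		moved, err := disableCNIConfigs("/etc/cni/net.d")
		fmt.Println(moved, err)
	}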
	I0926 22:29:56.901143  213442 start.go:495] detecting cgroup driver to use...
	I0926 22:29:56.901183  213442 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 22:29:56.901229  213442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 22:29:56.917245  213442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 22:29:56.929428  213442 docker.go:218] disabling cri-docker service (if available) ...
	I0926 22:29:56.929485  213442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 22:29:56.943607  213442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 22:29:56.958457  213442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 22:29:57.028188  213442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 22:29:57.101774  213442 docker.go:234] disabling docker service ...
	I0926 22:29:57.101842  213442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 22:29:57.121946  213442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 22:29:57.133720  213442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 22:29:57.204180  213442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 22:29:57.317478  213442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 22:29:57.329915  213442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 22:29:57.347111  213442 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 22:29:57.347176  213442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:57.359630  213442 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0926 22:29:57.359694  213442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:57.369703  213442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:57.379458  213442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:57.389108  213442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 22:29:57.398283  213442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:57.408119  213442 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 22:29:57.424232  213442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
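	The run of sed edits rewrites keys in CRI-O's drop-in: the pause image, the cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. A Go sketch covering the first two rewrites with the same line-anchored replacements:

	package main

	import (
		"os"
		"regexp"
	)

	// Rewrites the same keys the log's sed one-liners touch in
	// /etc/crio/crio.conf.d/02-crio.conf: pause image and cgroup manager.
	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}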
	I0926 22:29:57.434110  213442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 22:29:57.442386  213442 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 22:29:57.442431  213442 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 22:29:57.455862  213442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
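	Note the fallback logic in the lines above: the sysctl probe is allowed to fail with status 255 while br_netfilter isn't loaded yet; minikube then loads the module and enables IPv4 forwarding. The same control flow as a Go sketch:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the log's fallback: if the sysctl key
	// is missing, load br_netfilter, then enable IPv4 forwarding.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// Key is absent until the module is loaded; this failure is expected.
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
	}

	func main() {
		fmt.Println(ensureBridgeNetfilter())
	}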
	I0926 22:29:57.464739  213442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:57.573289  213442 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 22:29:57.673211  213442 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 22:29:57.673290  213442 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 22:29:57.677066  213442 start.go:563] Will wait 60s for crictl version
	I0926 22:29:57.677146  213442 ssh_runner.go:195] Run: which crictl
	I0926 22:29:57.680518  213442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 22:29:57.716511  213442 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0926 22:29:57.716627  213442 ssh_runner.go:195] Run: crio --version
	I0926 22:29:57.753152  213442 ssh_runner.go:195] Run: crio --version
	I0926 22:29:57.791668  213442 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0926 22:29:57.792880  213442 cli_runner.go:164] Run: docker network inspect addons-341571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 22:29:57.809931  213442 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0926 22:29:57.813983  213442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
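	The bash fragment pins host.minikube.internal in /etc/hosts: strip any stale line, append the gateway mapping, copy the result back. A Go sketch of the same rewrite (it writes /etc/hosts directly instead of the temp-file-plus-sudo-cp used on the remote):

	package main

	import (
		"os"
		"strings"
	)

	// pinHostEntry filters out any stale line ending in "<TAB>name",
	// appends the fresh mapping, and rewrites /etc/hosts in one shot.
	func pinHostEntry(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var keep []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				keep = append(keep, line)
			}
		}
		keep = append(keep, ip+"\t"+name)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := pinHostEntry("192.168.49.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}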
	I0926 22:29:57.825693  213442 kubeadm.go:883] updating cluster {Name:addons-341571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-341571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 22:29:57.825836  213442 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 22:29:57.825889  213442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:29:57.895699  213442 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 22:29:57.895725  213442 crio.go:433] Images already preloaded, skipping extraction
	I0926 22:29:57.895770  213442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 22:29:57.932362  213442 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 22:29:57.932384  213442 cache_images.go:85] Images are preloaded, skipping loading
	I0926 22:29:57.932393  213442 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0926 22:29:57.932497  213442 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-341571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-341571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
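	The kubelet drop-in above is rendered from the node's config (runtime, binary path, hostname override, node IP). A text/template sketch that reproduces a trimmed version of it; the field names are illustrative, not minikube's types:

	package main

	import (
		"os"
		"text/template"
	)

	// unitTmpl is a trimmed stand-in for the [Unit]/[Service] drop-in
	// shown in the log above.
	const unitTmpl = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		_ = t.Execute(os.Stdout, map[string]string{
			"Runtime":     "crio",
			"KubeletPath": "/var/lib/minikube/binaries/v1.34.0/kubelet",
			"Node":        "addons-341571",
			"IP":          "192.168.49.2",
		})
	}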
	I0926 22:29:57.932570  213442 ssh_runner.go:195] Run: crio config
	I0926 22:29:57.975249  213442 cni.go:84] Creating CNI manager for ""
	I0926 22:29:57.975277  213442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0926 22:29:57.975297  213442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 22:29:57.975319  213442 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-341571 NodeName:addons-341571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 22:29:57.975433  213442 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-341571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
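	The generated kubeadm.yaml is a four-document stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A Go sketch that walks those documents with gopkg.in/yaml.v3 (an external module) and prints each document's kind:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			// Each YAML document decodes into a generic map; only
			// apiVersion/kind are inspected here.
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
		}
	}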
	
	I0926 22:29:57.975495  213442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 22:29:57.985495  213442 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 22:29:57.985568  213442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 22:29:57.994798  213442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0926 22:29:58.013028  213442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 22:29:58.033788  213442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0926 22:29:58.052270  213442 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0926 22:29:58.055981  213442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 22:29:58.067053  213442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:29:58.131814  213442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:29:58.156498  213442 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571 for IP: 192.168.49.2
	I0926 22:29:58.156522  213442 certs.go:195] generating shared ca certs ...
	I0926 22:29:58.156542  213442 certs.go:227] acquiring lock for ca certs: {Name:mk7fa2bdff33a744d301294affc1d74bea26e4b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:58.156691  213442 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key
	I0926 22:29:58.245187  213442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt ...
	I0926 22:29:58.245219  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt: {Name:mkaee5099d5d977adc480cf19eb339ca4e5128af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:58.245396  213442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key ...
	I0926 22:29:58.245408  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key: {Name:mk444ae3db9e67e1157d93278ed7e6a6ace42aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:58.245482  213442 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key
	I0926 22:29:58.407833  213442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.crt ...
	I0926 22:29:58.407862  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.crt: {Name:mk420a90e62de2b7b0c67b03e2006d6c0f06ef70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:58.408747  213442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key ...
	I0926 22:29:58.408762  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key: {Name:mkb1e06828f37c8b498e1108b5f09708c259c8c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:58.409370  213442 certs.go:257] generating profile certs ...
	I0926 22:29:58.409441  213442 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.key
	I0926 22:29:58.409456  213442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt with IP's: []
	I0926 22:29:58.695848  213442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt ...
	I0926 22:29:58.695881  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: {Name:mk2513e0c906eb015043bda901723e67dc05fa6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:58.696046  213442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.key ...
	I0926 22:29:58.696058  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.key: {Name:mk443cc8533e5709386df32badc180c970563dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:58.696827  213442 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.key.83ad1d2b
	I0926 22:29:58.696849  213442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.crt.83ad1d2b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0926 22:29:58.882177  213442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.crt.83ad1d2b ...
	I0926 22:29:58.882209  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.crt.83ad1d2b: {Name:mka5940a2a4ea03ec9f91c8f978cdedf3eb714a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:58.882365  213442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.key.83ad1d2b ...
	I0926 22:29:58.882379  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.key.83ad1d2b: {Name:mk6c65b269b8b4e8f1945b712b93d28123581752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:58.882445  213442 certs.go:382] copying /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.crt.83ad1d2b -> /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.crt
	I0926 22:29:58.882543  213442 certs.go:386] copying /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.key.83ad1d2b -> /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.key
	I0926 22:29:58.882600  213442 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/proxy-client.key
	I0926 22:29:58.882621  213442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/proxy-client.crt with IP's: []
	I0926 22:29:59.044246  213442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/proxy-client.crt ...
	I0926 22:29:59.044279  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/proxy-client.crt: {Name:mk41865b2a4649ab410811564be260cdc08f313f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:59.044450  213442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/proxy-client.key ...
	I0926 22:29:59.044465  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/proxy-client.key: {Name:mka2c44377db46ff79f44c8dee3d5464a78a9874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:29:59.044643  213442 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 22:29:59.044679  213442 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem (1078 bytes)
	I0926 22:29:59.044709  213442 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem (1123 bytes)
	I0926 22:29:59.044730  213442 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem (1675 bytes)
	I0926 22:29:59.045439  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 22:29:59.071485  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 22:29:59.095230  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 22:29:59.119195  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 22:29:59.142740  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 22:29:59.166686  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0926 22:29:59.190304  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 22:29:59.214614  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 22:29:59.239975  213442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 22:29:59.268097  213442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 22:29:59.286551  213442 ssh_runner.go:195] Run: openssl version
	I0926 22:29:59.292080  213442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 22:29:59.304075  213442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:59.307831  213442 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:59.307884  213442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 22:29:59.314611  213442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
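	The `openssl x509 -hash` step computes the subject hash (b5213941 here) that OpenSSL uses to look up certificates in /etc/ssl/certs, and the symlink publishes minikubeCA.pem under that name. A sketch of the same two steps driven from Go:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCAByHash mirrors the log's openssl step: compute the subject
	// hash of the PEM and symlink it under /etc/ssl/certs/<hash>.0.
	func linkCAByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // replace a stale link, like ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"))
	}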
	I0926 22:29:59.323756  213442 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 22:29:59.327006  213442 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 22:29:59.327063  213442 kubeadm.go:400] StartCluster: {Name:addons-341571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-341571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:29:59.327166  213442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 22:29:59.327238  213442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 22:29:59.362731  213442 cri.go:89] found id: ""
	I0926 22:29:59.362802  213442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 22:29:59.372188  213442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 22:29:59.381243  213442 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 22:29:59.381303  213442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 22:29:59.389866  213442 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 22:29:59.389881  213442 kubeadm.go:157] found existing configuration files:
	
	I0926 22:29:59.389916  213442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 22:29:59.398240  213442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 22:29:59.398288  213442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 22:29:59.406842  213442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 22:29:59.415211  213442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 22:29:59.415271  213442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 22:29:59.423837  213442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 22:29:59.432641  213442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 22:29:59.432697  213442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 22:29:59.440881  213442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 22:29:59.449594  213442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 22:29:59.449647  213442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 22:29:59.458256  213442 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 22:29:59.496782  213442 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 22:29:59.496868  213442 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 22:29:59.512298  213442 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 22:29:59.512387  213442 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 22:29:59.512435  213442 kubeadm.go:318] OS: Linux
	I0926 22:29:59.512495  213442 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 22:29:59.512574  213442 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 22:29:59.512645  213442 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 22:29:59.512714  213442 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 22:29:59.512797  213442 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 22:29:59.512856  213442 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 22:29:59.512898  213442 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 22:29:59.512964  213442 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 22:29:59.564496  213442 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 22:29:59.564633  213442 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 22:29:59.564797  213442 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 22:29:59.572268  213442 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 22:29:59.574182  213442 out.go:252]   - Generating certificates and keys ...
	I0926 22:29:59.574275  213442 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 22:29:59.574384  213442 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 22:29:59.689238  213442 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 22:29:59.772917  213442 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 22:29:59.838841  213442 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 22:29:59.953506  213442 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 22:30:00.174123  213442 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 22:30:00.174274  213442 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-341571 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:30:00.411788  213442 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 22:30:00.412075  213442 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-341571 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0926 22:30:00.451616  213442 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 22:30:00.590391  213442 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 22:30:01.130013  213442 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 22:30:01.130202  213442 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 22:30:01.207158  213442 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 22:30:01.516853  213442 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 22:30:01.608620  213442 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 22:30:02.161572  213442 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 22:30:02.496845  213442 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 22:30:02.497256  213442 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 22:30:02.501068  213442 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 22:30:02.502507  213442 out.go:252]   - Booting up control plane ...
	I0926 22:30:02.502617  213442 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 22:30:02.502719  213442 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 22:30:02.503497  213442 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 22:30:02.512796  213442 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 22:30:02.512930  213442 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 22:30:02.518847  213442 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 22:30:02.519175  213442 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 22:30:02.519244  213442 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 22:30:02.596684  213442 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 22:30:02.596832  213442 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 22:30:03.597701  213442 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00113081s
	I0926 22:30:03.600590  213442 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 22:30:03.600728  213442 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0926 22:30:03.600877  213442 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 22:30:03.601009  213442 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 22:30:05.158179  213442 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.557407585s
	I0926 22:30:05.889205  213442 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.288554213s
	I0926 22:30:07.602812  213442 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002095258s
	I0926 22:30:07.615331  213442 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 22:30:07.629019  213442 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 22:30:07.638436  213442 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 22:30:07.638774  213442 kubeadm.go:318] [mark-control-plane] Marking the node addons-341571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 22:30:07.646535  213442 kubeadm.go:318] [bootstrap-token] Using token: 30x8aq.wpbokj4xfzbmr6gl
	I0926 22:30:07.648050  213442 out.go:252]   - Configuring RBAC rules ...
	I0926 22:30:07.648205  213442 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 22:30:07.651410  213442 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 22:30:07.656788  213442 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 22:30:07.659112  213442 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 22:30:07.661514  213442 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 22:30:07.664952  213442 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 22:30:08.009242  213442 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 22:30:08.424427  213442 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 22:30:09.008525  213442 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 22:30:09.009308  213442 kubeadm.go:318] 
	I0926 22:30:09.009413  213442 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 22:30:09.009422  213442 kubeadm.go:318] 
	I0926 22:30:09.009541  213442 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 22:30:09.009563  213442 kubeadm.go:318] 
	I0926 22:30:09.009607  213442 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 22:30:09.009662  213442 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 22:30:09.009715  213442 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 22:30:09.009723  213442 kubeadm.go:318] 
	I0926 22:30:09.009771  213442 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 22:30:09.009778  213442 kubeadm.go:318] 
	I0926 22:30:09.009825  213442 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 22:30:09.009840  213442 kubeadm.go:318] 
	I0926 22:30:09.009925  213442 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 22:30:09.010030  213442 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 22:30:09.010151  213442 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 22:30:09.010160  213442 kubeadm.go:318] 
	I0926 22:30:09.010229  213442 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 22:30:09.010291  213442 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 22:30:09.010297  213442 kubeadm.go:318] 
	I0926 22:30:09.010362  213442 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 30x8aq.wpbokj4xfzbmr6gl \
	I0926 22:30:09.010502  213442 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3d6fdc47b9a05c23d4386afe181df704d515c8f0f79bc04c09ac3ae58668e55b \
	I0926 22:30:09.010536  213442 kubeadm.go:318] 	--control-plane 
	I0926 22:30:09.010550  213442 kubeadm.go:318] 
	I0926 22:30:09.010678  213442 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 22:30:09.010687  213442 kubeadm.go:318] 
	I0926 22:30:09.010807  213442 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 30x8aq.wpbokj4xfzbmr6gl \
	I0926 22:30:09.010933  213442 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3d6fdc47b9a05c23d4386afe181df704d515c8f0f79bc04c09ac3ae58668e55b 
	I0926 22:30:09.013776  213442 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 22:30:09.013928  213442 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 22:30:09.013961  213442 cni.go:84] Creating CNI manager for ""
	I0926 22:30:09.013974  213442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0926 22:30:09.015662  213442 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0926 22:30:09.016873  213442 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0926 22:30:09.021073  213442 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0926 22:30:09.021110  213442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0926 22:30:09.039629  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0926 22:30:09.246472  213442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 22:30:09.246564  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:09.246612  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-341571 minikube.k8s.io/updated_at=2025_09_26T22_30_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=addons-341571 minikube.k8s.io/primary=true
	I0926 22:30:09.257356  213442 ops.go:34] apiserver oom_adj: -16
	I0926 22:30:09.334887  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:09.835173  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:10.335228  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:10.836026  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:11.335298  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:11.835263  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:12.335552  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:12.835489  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:13.335128  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:13.835228  213442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 22:30:13.901510  213442 kubeadm.go:1113] duration metric: took 4.655019136s to wait for elevateKubeSystemPrivileges
	I0926 22:30:13.901546  213442 kubeadm.go:402] duration metric: took 14.574489984s to StartCluster
	I0926 22:30:13.901568  213442 settings.go:142] acquiring lock: {Name:mk916931486ea7be0f55a69a0dcc9388c8f91bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:30:13.902420  213442 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 22:30:13.902902  213442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/kubeconfig: {Name:mk573e8783a83da2d326620e120d75cc729311d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 22:30:13.903125  213442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 22:30:13.903137  213442 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 22:30:13.903199  213442 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0926 22:30:13.903361  213442 config.go:182] Loaded profile config "addons-341571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:30:13.903374  213442 addons.go:69] Setting yakd=true in profile "addons-341571"
	I0926 22:30:13.903415  213442 addons.go:69] Setting inspektor-gadget=true in profile "addons-341571"
	I0926 22:30:13.903423  213442 addons.go:238] Setting addon yakd=true in "addons-341571"
	I0926 22:30:13.903429  213442 addons.go:238] Setting addon inspektor-gadget=true in "addons-341571"
	I0926 22:30:13.903458  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.903468  213442 addons.go:69] Setting metrics-server=true in profile "addons-341571"
	I0926 22:30:13.903480  213442 addons.go:238] Setting addon metrics-server=true in "addons-341571"
	I0926 22:30:13.903503  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.903538  213442 addons.go:69] Setting storage-provisioner=true in profile "addons-341571"
	I0926 22:30:13.903545  213442 addons.go:69] Setting volcano=true in profile "addons-341571"
	I0926 22:30:13.903582  213442 addons.go:69] Setting registry=true in profile "addons-341571"
	I0926 22:30:13.903587  213442 addons.go:69] Setting registry-creds=true in profile "addons-341571"
	I0926 22:30:13.903601  213442 addons.go:238] Setting addon registry=true in "addons-341571"
	I0926 22:30:13.903603  213442 addons.go:238] Setting addon registry-creds=true in "addons-341571"
	I0926 22:30:13.903582  213442 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-341571"
	I0926 22:30:13.903610  213442 addons.go:238] Setting addon volcano=true in "addons-341571"
	I0926 22:30:13.903624  213442 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-341571"
	I0926 22:30:13.903631  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.903664  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.903680  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.904015  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.904114  213442 addons.go:69] Setting volumesnapshots=true in profile "addons-341571"
	I0926 22:30:13.904142  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.904199  213442 addons.go:69] Setting cloud-spanner=true in profile "addons-341571"
	I0926 22:30:13.904213  213442 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-341571"
	I0926 22:30:13.904233  213442 addons.go:238] Setting addon cloud-spanner=true in "addons-341571"
	I0926 22:30:13.904255  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.904270  213442 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-341571"
	I0926 22:30:13.904292  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.904307  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.904315  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.904201  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.905826  213442 out.go:179] * Verifying Kubernetes components...
	I0926 22:30:13.906143  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.903579  213442 addons.go:238] Setting addon storage-provisioner=true in "addons-341571"
	I0926 22:30:13.906490  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.907388  213442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 22:30:13.904072  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.907874  213442 addons.go:69] Setting ingress-dns=true in profile "addons-341571"
	I0926 22:30:13.907897  213442 addons.go:238] Setting addon ingress-dns=true in "addons-341571"
	I0926 22:30:13.907947  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.908497  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.910297  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.903459  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.904163  213442 addons.go:238] Setting addon volumesnapshots=true in "addons-341571"
	I0926 22:30:13.910601  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.911249  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.911525  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.911605  213442 addons.go:69] Setting gcp-auth=true in profile "addons-341571"
	I0926 22:30:13.911628  213442 mustload.go:65] Loading cluster: addons-341571
	I0926 22:30:13.904190  213442 addons.go:69] Setting default-storageclass=true in profile "addons-341571"
	I0926 22:30:13.912637  213442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-341571"
	I0926 22:30:13.904195  213442 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-341571"
	I0926 22:30:13.914917  213442 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-341571"
	I0926 22:30:13.915069  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.915737  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.904179  213442 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-341571"
	I0926 22:30:13.916762  213442 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-341571"
	I0926 22:30:13.916799  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.917026  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.917064  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.917456  213442 addons.go:69] Setting ingress=true in profile "addons-341571"
	I0926 22:30:13.917561  213442 addons.go:238] Setting addon ingress=true in "addons-341571"
	I0926 22:30:13.917621  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.923727  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.924906  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.925449  213442 config.go:182] Loaded profile config "addons-341571": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:30:13.925860  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.949990  213442 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0926 22:30:13.951312  213442 out.go:179]   - Using image docker.io/registry:3.0.0
	I0926 22:30:13.952518  213442 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 22:30:13.952539  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 22:30:13.952605  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
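The inspect template used here (and repeated below once per addon installer) digs the host port bound to the container's 22/tcp out of .NetworkSettings.Ports; that port is what the later sshutil lines dial as 127.0.0.1:32768. A small Go sketch of the same lookup via the docker CLI, reusing the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// same Go template the log shows: index the port map at "22/tcp",
	// take the first binding, and print its HostPort
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-341571").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}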
	I0926 22:30:13.955815  213442 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-341571"
	I0926 22:30:13.955875  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:13.956413  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:13.965571  213442 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0926 22:30:13.966888  213442 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 22:30:13.966912  213442 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0926 22:30:13.967002  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	W0926 22:30:13.970935  213442 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0926 22:30:13.975866  213442 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 22:30:13.979963  213442 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:30:13.979989  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 22:30:13.980096  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.002282  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:14.004663  213442 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0926 22:30:14.004750  213442 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0926 22:30:14.004783  213442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 22:30:14.005876  213442 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 22:30:14.005902  213442 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 22:30:14.005981  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.006285  213442 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:30:14.006300  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0926 22:30:14.006356  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.007940  213442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 22:30:14.009175  213442 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0926 22:30:14.009250  213442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 22:30:14.011171  213442 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:30:14.011188  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 22:30:14.011237  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.011396  213442 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0926 22:30:14.012506  213442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 22:30:14.012537  213442 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0926 22:30:14.013919  213442 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:30:14.013920  213442 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0926 22:30:14.014313  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 22:30:14.014579  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.015931  213442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 22:30:14.021357  213442 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:30:14.021483  213442 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0926 22:30:14.022700  213442 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:30:14.022722  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 22:30:14.022814  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.022820  213442 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 22:30:14.022883  213442 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 22:30:14.023063  213442 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 22:30:14.023913  213442 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:30:14.023985  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0926 22:30:14.024113  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.025277  213442 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 22:30:14.025304  213442 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 22:30:14.024441  213442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 22:30:14.025353  213442 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 22:30:14.025425  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.025457  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.029254  213442 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 22:30:14.032870  213442 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 22:30:14.032979  213442 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0926 22:30:14.035728  213442 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:30:14.035754  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0926 22:30:14.035831  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.036041  213442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 22:30:14.036056  213442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 22:30:14.036321  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.039693  213442 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 22:30:14.042220  213442 addons.go:238] Setting addon default-storageclass=true in "addons-341571"
	I0926 22:30:14.042267  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:14.042273  213442 out.go:179]   - Using image docker.io/busybox:stable
	I0926 22:30:14.042780  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:14.043827  213442 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:30:14.043855  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 22:30:14.043911  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.052001  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.059129  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.072164  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.072804  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.078719  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.079254  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.094224  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.096305  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.103324  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.108188  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.112723  213442 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 22:30:14.112742  213442 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 22:30:14.112802  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:14.117877  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.120178  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.131040  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.133644  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	W0926 22:30:14.135433  213442 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0926 22:30:14.135476  213442 retry.go:31] will retry after 314.997622ms: ssh: handshake failed: EOF
	I0926 22:30:14.157619  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:14.174520  213442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 22:30:14.174828  213442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
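This one-liner rewrites the CoreDNS ConfigMap in place: sed inserts a hosts block ahead of the `forward . /etc/resolv.conf` directive and a `log` directive after `errors`, then pipes the result back through kubectl replace. Once it succeeds (logged below as the host record being injected), the Corefile should contain a fragment like the following, reconstructed here from the sed expression rather than copied from the cluster:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf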
	I0926 22:30:14.248250  213442 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:14.248279  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0926 22:30:14.248389  213442 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 22:30:14.248400  213442 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 22:30:14.251852  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 22:30:14.263390  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 22:30:14.279147  213442 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 22:30:14.279174  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 22:30:14.290859  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 22:30:14.307655  213442 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 22:30:14.307685  213442 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 22:30:14.317248  213442 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:30:14.317278  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 22:30:14.323585  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0926 22:30:14.327817  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:14.333838  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0926 22:30:14.337063  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 22:30:14.342764  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 22:30:14.345591  213442 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 22:30:14.345616  213442 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 22:30:14.368651  213442 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 22:30:14.368682  213442 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 22:30:14.371645  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 22:30:14.374542  213442 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 22:30:14.374567  213442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 22:30:14.395900  213442 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 22:30:14.395950  213442 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 22:30:14.398158  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 22:30:14.449852  213442 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 22:30:14.449896  213442 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 22:30:14.455478  213442 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:30:14.455503  213442 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 22:30:14.486457  213442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 22:30:14.486530  213442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 22:30:14.502051  213442 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 22:30:14.502144  213442 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 22:30:14.547634  213442 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 22:30:14.547667  213442 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 22:30:14.571761  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 22:30:14.595182  213442 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 22:30:14.595291  213442 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 22:30:14.615071  213442 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:30:14.615115  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 22:30:14.617253  213442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 22:30:14.617274  213442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 22:30:14.663221  213442 node_ready.go:35] waiting up to 6m0s for node "addons-341571" to be "Ready" ...
	I0926 22:30:14.663499  213442 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
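The 6m node wait started just above keeps re-reading the node object until its NodeReady condition turns True; the node_ready.go:57 retries further down are this loop observing "Ready":"False". A minimal client-go sketch of that check, using this run's kubeconfig path and node name (an illustration under those assumptions, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-341571", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// NodeReady flips to ConditionTrue once kubelet, runtime and CNI are up
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node addons-341571 is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // poll interval; minikube's own cadence may differ
	}
}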
	I0926 22:30:14.691870  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 22:30:14.705748  213442 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:30:14.705777  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 22:30:14.717618  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 22:30:14.723381  213442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 22:30:14.723410  213442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 22:30:14.773640  213442 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 22:30:14.773671  213442 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 22:30:14.790039  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 22:30:14.875029  213442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 22:30:14.875169  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 22:30:14.966187  213442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 22:30:14.966219  213442 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 22:30:15.033597  213442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 22:30:15.033627  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 22:30:15.072947  213442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 22:30:15.072977  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 22:30:15.130370  213442 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:30:15.130409  213442 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 22:30:15.173926  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 22:30:15.190636  213442 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-341571" context rescaled to 1 replicas
	W0926 22:30:15.243809  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:15.243853  213442 retry.go:31] will retry after 240.051916ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
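The root cause of this failure is visible earlier in the log: ig-crd.yaml was copied to the node as only 14 bytes (the scp line at 22:30:13.966912), so kubectl's client-side validation rejects it with "apiVersion not set, kind not set"; every manifest must declare at least these two fields. A minimal header of the kind the file is missing, with the group/kind an inspektor-gadget CRD would normally carry, shown purely as an illustration:

apiVersion: apiextensions.k8s.io/v1   # required on every manifest
kind: CustomResourceDefinition        # required on every manifest

Because the header is absent from the file itself, adding --force cannot help, which is why every retry below fails identically.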
	I0926 22:30:15.484949  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:15.650835  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.308023675s)
	I0926 22:30:15.650896  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.279202918s)
	I0926 22:30:15.650977  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.252780878s)
	I0926 22:30:15.651004  213442 addons.go:479] Verifying addon registry=true in "addons-341571"
	I0926 22:30:15.651264  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.31416713s)
	I0926 22:30:15.651287  213442 addons.go:479] Verifying addon ingress=true in "addons-341571"
	I0926 22:30:15.651462  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.079656578s)
	I0926 22:30:15.651569  213442 addons.go:479] Verifying addon metrics-server=true in "addons-341571"
	I0926 22:30:15.652539  213442 out.go:179] * Verifying ingress addon...
	I0926 22:30:15.652553  213442 out.go:179] * Verifying registry addon...
	I0926 22:30:15.655319  213442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 22:30:15.655600  213442 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 22:30:15.659033  213442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 22:30:15.659057  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:15.660157  213442 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 22:30:15.660172  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
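Each kapi.go:96 line that follows is one round of polling the pods behind a label selector until they leave Pending. A client-go sketch of the same query for the registry selector shown above (illustrative only, not minikube's implementation):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// list the pods the waiter watches: kube-system pods carrying the addon label
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase) // Pending until the image is pulled and started
	}
}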
	I0926 22:30:16.114051  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.422124125s)
	W0926 22:30:16.114150  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 22:30:16.114153  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.396487388s)
	I0926 22:30:16.114179  213442 retry.go:31] will retry after 133.649211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 22:30:16.114203  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.32413007s)
	I0926 22:30:16.114438  213442 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-341571"
	I0926 22:30:16.116036  213442 out.go:179] * Verifying csi-hostpath-driver addon...
	I0926 22:30:16.116036  213442 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-341571 service yakd-dashboard -n yakd-dashboard
	
	I0926 22:30:16.119242  213442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 22:30:16.122127  213442 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 22:30:16.122145  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.222864  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:16.223042  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:16.248230  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0926 22:30:16.256613  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:16.256650  213442 retry.go:31] will retry after 326.584214ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:16.584418  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:16.623193  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:16.659231  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:16.659387  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:16.666707  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:17.123224  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.158867  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:17.158908  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.623364  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:17.659284  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:17.659287  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:18.123228  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.159288  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:18.159557  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.622447  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:18.657855  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:18.658074  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:18.735659  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.487380818s)
	I0926 22:30:18.735746  213442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.151290945s)
	W0926 22:30:18.735788  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:18.735805  213442 retry.go:31] will retry after 473.718301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:19.121605  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.158234  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:19.158452  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:19.166157  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:19.210032  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:19.622540  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:19.658777  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:19.659060  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:19.759309  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:19.759346  213442 retry.go:31] will retry after 649.877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:20.122983  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.158802  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:20.158925  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:20.410038  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:20.622685  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:20.658327  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:20.658495  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:20.958450  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:20.958485  213442 retry.go:31] will retry after 1.587348324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:21.122960  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.158636  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:21.158841  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:21.608893  213442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 22:30:21.608983  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:21.623654  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:21.626866  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
	I0926 22:30:21.658838  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:21.659073  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:21.666320  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:21.735874  213442 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 22:30:21.756785  213442 addons.go:238] Setting addon gcp-auth=true in "addons-341571"
	I0926 22:30:21.756844  213442 host.go:66] Checking if "addons-341571" exists ...
	I0926 22:30:21.757255  213442 cli_runner.go:164] Run: docker container inspect addons-341571 --format={{.State.Status}}
	I0926 22:30:21.775275  213442 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 22:30:21.775331  213442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-341571
	I0926 22:30:21.793552  213442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa Username:docker}
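This is the second SSH session opened the same way: the host port published for the container's 22/tcp is read back with docker container inspect, and the machine's generated key is used to log in as the docker user. A roughly equivalent manual connection, using the port and key path from the log lines above:

	ssh -i /home/jenkins/minikube-integration/21642-208519/.minikube/machines/addons-341571/id_rsa \
	    -p 32768 docker@127.0.0.1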
	I0926 22:30:21.887626  213442 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0926 22:30:21.888787  213442 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0926 22:30:21.889797  213442 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 22:30:21.889813  213442 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 22:30:21.910973  213442 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 22:30:21.911002  213442 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 22:30:21.931143  213442 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:30:21.931168  213442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 22:30:21.950222  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 22:30:22.122425  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.160227  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.160490  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:22.276585  213442 addons.go:479] Verifying addon gcp-auth=true in "addons-341571"
	I0926 22:30:22.277948  213442 out.go:179] * Verifying gcp-auth addon...
	I0926 22:30:22.279784  213442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 22:30:22.282173  213442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 22:30:22.282189  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
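The kapi.go polling here is a label-selector readiness wait on the gcp-auth webhook pod. Outside the harness, an approximation of the same check is a single kubectl wait (assuming a kubectl recent enough to support --for=condition):

	kubectl --context addons-341571 -n gcp-auth wait pod \
	    -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=120s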
	I0926 22:30:22.546561  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:22.622969  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:22.659104  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:22.659297  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:22.783507  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:23.104685  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:23.104763  213442 retry.go:31] will retry after 2.534560759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:23.122888  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.224136  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:23.224360  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.325233  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:23.622157  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:23.658815  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:23.658959  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:23.782834  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.123042  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.158741  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:24.158926  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:24.168073  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:24.282986  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:24.622984  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:24.658679  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:24.658900  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:24.783111  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.122954  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.158631  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:25.158833  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.284081  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:25.623068  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:25.640142  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:25.659646  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:25.659866  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:25.782925  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.122833  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.158610  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.158836  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:26.186523  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:26.186562  213442 retry.go:31] will retry after 3.861842707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
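The delays chosen so far (650ms, 1.6s, 2.5s, 3.9s) show retry.go backing off between attempts, growing roughly geometrically with jitter. As a sketch of that loop shape only — shell, not minikube's actual Go implementation:

	delay=0.65
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force \
	      -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
	    sleep "$delay"
	    delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')   # roughly doubling; the real schedule adds jitter
	done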
	I0926 22:30:26.283794  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:26.622856  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:26.658313  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:26.658527  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:26.665639  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:26.783439  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.122560  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.158519  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:27.158604  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.283487  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:27.623235  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:27.659152  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:27.659336  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:27.783473  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.122327  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.159205  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:28.159321  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.283158  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:28.622868  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:28.658502  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:28.658711  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:28.783795  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.123192  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.159142  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:29.159199  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:29.166179  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:29.283544  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:29.623168  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:29.659003  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:29.659071  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:29.783397  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:30.048744  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:30.123003  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.158941  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:30.159191  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.283220  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:30.594510  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:30.594541  213442 retry.go:31] will retry after 3.107106811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:30.622479  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:30.658322  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:30.658394  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:30.783513  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.122893  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.158417  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:31.158557  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:31.283594  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:31.622476  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:31.657952  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:31.658144  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:31.666468  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:31.783179  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.122014  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.158930  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:32.159192  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.282917  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:32.623054  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:32.658484  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:32.658702  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:32.782489  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.122535  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.158684  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.158683  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:33.283746  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:33.622970  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:33.658886  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:33.658955  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:33.702375  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:33.782782  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.122702  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.159018  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:34.159080  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:34.165937  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	W0926 22:30:34.258148  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:34.258179  213442 retry.go:31] will retry after 3.562184569s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:34.283073  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:34.622296  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:34.659211  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:34.659331  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:34.783497  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.122416  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.158119  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:35.158321  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:35.284257  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:35.622192  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:35.658958  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:35.659173  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:35.783421  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.122450  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.158065  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:36.158311  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:36.166477  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:36.283791  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:36.622952  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:36.658541  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:36.658684  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:36.782745  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.123370  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.159337  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:37.159527  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:37.283195  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.623815  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:37.658963  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:37.659000  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:37.782912  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:37.821026  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:38.122788  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.164754  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:38.167979  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:38.168483  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:38.283020  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:30:38.392033  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:38.392070  213442 retry.go:31] will retry after 13.310166657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:38.622967  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:38.658873  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:38.659054  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:38.783122  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.122163  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.159337  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:39.159415  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:39.283950  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:39.622016  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:39.658970  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:39.659136  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:39.783608  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.122871  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:40.158869  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:40.159208  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:40.283174  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:40.622249  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:40.658853  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:40.658983  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:40.665853  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:40.782983  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.123305  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:41.159026  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:41.159122  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:41.283178  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:41.622396  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:41.659212  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:41.659282  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:41.783214  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.122226  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:42.159147  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:42.159198  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:42.283233  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:42.622402  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:42.659175  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:42.659299  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:42.666369  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:42.783390  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.122543  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:43.158300  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:43.158484  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:43.283754  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:43.622970  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:43.658751  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:43.658950  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:43.782935  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.122353  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:44.159948  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:44.160261  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:44.283782  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:44.623288  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:44.658906  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:44.659192  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:44.782913  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.122898  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:45.158501  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:45.158649  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:45.165616  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:45.283821  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:45.622874  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:45.658472  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:45.658644  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:45.783537  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.122459  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:46.157886  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:46.158030  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:46.282428  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:46.622835  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:46.658330  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:46.658348  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:46.783117  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.122461  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:47.158143  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:47.158396  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:47.166524  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:47.283674  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:47.623129  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:47.659096  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:47.659121  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:47.783192  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.122272  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:48.159386  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:48.159579  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:48.283819  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:48.622834  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:48.658634  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:48.658831  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:48.783653  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.123271  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:49.158991  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:49.159146  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:49.283071  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:49.621994  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:49.658637  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:49.658791  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:49.665769  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:49.782723  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.122993  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:50.158769  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:50.158923  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:50.282525  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:50.622520  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:50.658327  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:50.658444  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:50.783729  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.122648  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:51.158395  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:51.158614  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:51.283721  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:51.622784  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:51.658433  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:51.658641  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:51.702794  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:30:51.782827  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.123164  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:52.158480  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:52.158684  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:52.166044  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	W0926 22:30:52.256380  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:52.256422  213442 retry.go:31] will retry after 14.725862992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:30:52.283187  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:52.622173  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:52.658778  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:52.658796  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:52.783574  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.122626  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:53.158378  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:53.158535  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:53.283367  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:53.622082  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:53.659060  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:53.659077  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:53.782659  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.122869  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:54.159283  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:54.161502  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:54.283300  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:54.622566  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:54.658267  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:54.658430  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0926 22:30:54.666568  213442 node_ready.go:57] node "addons-341571" has "Ready":"False" status (will retry)
	I0926 22:30:54.783547  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:55.122810  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:55.158673  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:55.158874  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:55.283700  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:55.623499  213442 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 22:30:55.623531  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:55.666032  213442 node_ready.go:49] node "addons-341571" is "Ready"
	I0926 22:30:55.666064  213442 node_ready.go:38] duration metric: took 41.002786279s for node "addons-341571" to be "Ready" ...
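This transition is what everything above was gated on: a NotReady node carries the node.kubernetes.io/not-ready taint, so none of the addon pods could be scheduled until kindnet (the CNI here) brought networking up, which is why they all sat in Pending for those 41 seconds. The condition can be read directly:

	kubectl --context addons-341571 get node addons-341571 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'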
	I0926 22:30:55.666081  213442 api_server.go:52] waiting for apiserver process to appear ...
	I0926 22:30:55.666156  213442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:30:55.679593  213442 api_server.go:72] duration metric: took 41.776421937s to wait for apiserver process to appear ...
	I0926 22:30:55.679618  213442 api_server.go:88] waiting for apiserver healthz status ...
	I0926 22:30:55.679636  213442 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0926 22:30:55.684729  213442 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
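	The healthz probe is a plain HTTPS GET against the API server; under default RBAC the system:public-info-viewer binding lets even unauthenticated clients read it, so the same check works by hand (assuming defaults are unchanged):

	curl -k https://192.168.49.2:8443/healthz
	# ok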
	I0926 22:30:55.685859  213442 api_server.go:141] control plane version: v1.34.0
	I0926 22:30:55.685888  213442 api_server.go:131] duration metric: took 6.262576ms to wait for apiserver health ...
	I0926 22:30:55.685900  213442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 22:30:55.724658  213442 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 22:30:55.724682  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:55.724871  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:55.726040  213442 system_pods.go:59] 20 kube-system pods found
	I0926 22:30:55.726074  213442 system_pods.go:61] "amd-gpu-device-plugin-5w42z" [286d2d17-a9e5-4aed-8459-4513ce47bc96] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:55.726081  213442 system_pods.go:61] "coredns-66bc5c9577-6lgt2" [834c5a6c-dd34-425e-9aaf-cebe60b266f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:30:55.726105  213442 system_pods.go:61] "csi-hostpath-attacher-0" [876d9d95-8c11-4574-bb90-e310553cad36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:55.726113  213442 system_pods.go:61] "csi-hostpath-resizer-0" [a56bc47e-8278-4afc-92aa-aefa96c69f20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:55.726122  213442 system_pods.go:61] "csi-hostpathplugin-thdxj" [7bd2ab76-e381-4037-bca4-75f52426f1fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:55.726132  213442 system_pods.go:61] "etcd-addons-341571" [46828c2d-9348-47fe-8c4e-af8b24e30cc0] Running
	I0926 22:30:55.726136  213442 system_pods.go:61] "kindnet-qckxx" [4b7f8410-66c4-4e04-bff8-cc3beb2a4d42] Running
	I0926 22:30:55.726139  213442 system_pods.go:61] "kube-apiserver-addons-341571" [1f14818e-648d-477f-a099-33e60f4b2b9b] Running
	I0926 22:30:55.726143  213442 system_pods.go:61] "kube-controller-manager-addons-341571" [570402f2-bc85-43ec-8745-a69279070321] Running
	I0926 22:30:55.726148  213442 system_pods.go:61] "kube-ingress-dns-minikube" [bc1f29c4-b6c1-4daa-9fcd-50101584ff4a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:55.726152  213442 system_pods.go:61] "kube-proxy-qlkkx" [efd79ac0-eabc-4122-9f3f-f4a201bae1c4] Running
	I0926 22:30:55.726155  213442 system_pods.go:61] "kube-scheduler-addons-341571" [4b80f23a-724f-42e4-ad35-444631095085] Running
	I0926 22:30:55.726161  213442 system_pods.go:61] "metrics-server-85b7d694d7-t56ww" [6a75dcb7-123e-4f50-8ff9-89899540cfc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:55.726170  213442 system_pods.go:61] "nvidia-device-plugin-daemonset-8j546" [471abb6b-3a33-47a4-96ae-a4fd74359b94] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:55.726176  213442 system_pods.go:61] "registry-66898fdd98-ctjkw" [8a0f3fc0-2d0c-464a-9a56-61049fbb2336] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:55.726181  213442 system_pods.go:61] "registry-creds-764b6fb674-kzkvl" [fc99e324-fce1-4919-a021-73f9da278ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:55.726186  213442 system_pods.go:61] "registry-proxy-tzv57" [d80fb770-8ba3-4d49-9f0e-c1525a809861] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:55.726194  213442 system_pods.go:61] "snapshot-controller-7d9fbc56b8-r8ldh" [fb361264-5548-46d4-970d-a4a33429ba2f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:55.726200  213442 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s54k8" [ec65a6b7-a617-45ca-b19d-9ad6839c2289] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:55.726210  213442 system_pods.go:61] "storage-provisioner" [6a44aa85-9420-4cae-aa3a-fec9bb4baf43] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:55.726217  213442 system_pods.go:74] duration metric: took 40.308549ms to wait for pod list to return data ...
	I0926 22:30:55.726230  213442 default_sa.go:34] waiting for default service account to be created ...
	I0926 22:30:55.729035  213442 default_sa.go:45] found service account: "default"
	I0926 22:30:55.729059  213442 default_sa.go:55] duration metric: took 2.821666ms for default service account to be created ...
	I0926 22:30:55.729070  213442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 22:30:55.733507  213442 system_pods.go:86] 20 kube-system pods found
	I0926 22:30:55.733540  213442 system_pods.go:89] "amd-gpu-device-plugin-5w42z" [286d2d17-a9e5-4aed-8459-4513ce47bc96] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:55.733554  213442 system_pods.go:89] "coredns-66bc5c9577-6lgt2" [834c5a6c-dd34-425e-9aaf-cebe60b266f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:30:55.733565  213442 system_pods.go:89] "csi-hostpath-attacher-0" [876d9d95-8c11-4574-bb90-e310553cad36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:55.733573  213442 system_pods.go:89] "csi-hostpath-resizer-0" [a56bc47e-8278-4afc-92aa-aefa96c69f20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:55.733586  213442 system_pods.go:89] "csi-hostpathplugin-thdxj" [7bd2ab76-e381-4037-bca4-75f52426f1fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:55.733597  213442 system_pods.go:89] "etcd-addons-341571" [46828c2d-9348-47fe-8c4e-af8b24e30cc0] Running
	I0926 22:30:55.733603  213442 system_pods.go:89] "kindnet-qckxx" [4b7f8410-66c4-4e04-bff8-cc3beb2a4d42] Running
	I0926 22:30:55.733611  213442 system_pods.go:89] "kube-apiserver-addons-341571" [1f14818e-648d-477f-a099-33e60f4b2b9b] Running
	I0926 22:30:55.733617  213442 system_pods.go:89] "kube-controller-manager-addons-341571" [570402f2-bc85-43ec-8745-a69279070321] Running
	I0926 22:30:55.733628  213442 system_pods.go:89] "kube-ingress-dns-minikube" [bc1f29c4-b6c1-4daa-9fcd-50101584ff4a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:55.733638  213442 system_pods.go:89] "kube-proxy-qlkkx" [efd79ac0-eabc-4122-9f3f-f4a201bae1c4] Running
	I0926 22:30:55.733646  213442 system_pods.go:89] "kube-scheduler-addons-341571" [4b80f23a-724f-42e4-ad35-444631095085] Running
	I0926 22:30:55.733657  213442 system_pods.go:89] "metrics-server-85b7d694d7-t56ww" [6a75dcb7-123e-4f50-8ff9-89899540cfc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:55.733676  213442 system_pods.go:89] "nvidia-device-plugin-daemonset-8j546" [471abb6b-3a33-47a4-96ae-a4fd74359b94] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:55.733692  213442 system_pods.go:89] "registry-66898fdd98-ctjkw" [8a0f3fc0-2d0c-464a-9a56-61049fbb2336] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:55.733701  213442 system_pods.go:89] "registry-creds-764b6fb674-kzkvl" [fc99e324-fce1-4919-a021-73f9da278ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:55.733717  213442 system_pods.go:89] "registry-proxy-tzv57" [d80fb770-8ba3-4d49-9f0e-c1525a809861] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:55.733725  213442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8ldh" [fb361264-5548-46d4-970d-a4a33429ba2f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:55.733734  213442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s54k8" [ec65a6b7-a617-45ca-b19d-9ad6839c2289] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:55.733741  213442 system_pods.go:89] "storage-provisioner" [6a44aa85-9420-4cae-aa3a-fec9bb4baf43] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:55.733761  213442 retry.go:31] will retry after 226.438859ms: missing components: kube-dns
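
The scan above is system_pods.go's loop: list every kube-system pod, then retry until the required components (kube-dns here) report Running. A hedged client-go sketch of that check (the `k8s-app=kube-dns` label selector and the default kubeconfig path are assumptions for a standard kubeadm/coredns setup, not minikube's exact code):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// coreDNSRunning reports whether any kube-dns/coredns pod is in phase
	// Running, the condition the "missing components: kube-dns" retry waits on.
	func coreDNSRunning(clientset *kubernetes.Clientset) (bool, error) {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"}) // assumed selector
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ok, err := coreDNSRunning(clientset)
		fmt.Println("coredns running:", ok, err)
	}
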
	I0926 22:30:55.826369  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:55.965432  213442 system_pods.go:86] 20 kube-system pods found
	I0926 22:30:55.965474  213442 system_pods.go:89] "amd-gpu-device-plugin-5w42z" [286d2d17-a9e5-4aed-8459-4513ce47bc96] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:55.965485  213442 system_pods.go:89] "coredns-66bc5c9577-6lgt2" [834c5a6c-dd34-425e-9aaf-cebe60b266f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:30:55.965495  213442 system_pods.go:89] "csi-hostpath-attacher-0" [876d9d95-8c11-4574-bb90-e310553cad36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:55.965502  213442 system_pods.go:89] "csi-hostpath-resizer-0" [a56bc47e-8278-4afc-92aa-aefa96c69f20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:55.965511  213442 system_pods.go:89] "csi-hostpathplugin-thdxj" [7bd2ab76-e381-4037-bca4-75f52426f1fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:55.965524  213442 system_pods.go:89] "etcd-addons-341571" [46828c2d-9348-47fe-8c4e-af8b24e30cc0] Running
	I0926 22:30:55.965531  213442 system_pods.go:89] "kindnet-qckxx" [4b7f8410-66c4-4e04-bff8-cc3beb2a4d42] Running
	I0926 22:30:55.965540  213442 system_pods.go:89] "kube-apiserver-addons-341571" [1f14818e-648d-477f-a099-33e60f4b2b9b] Running
	I0926 22:30:55.965548  213442 system_pods.go:89] "kube-controller-manager-addons-341571" [570402f2-bc85-43ec-8745-a69279070321] Running
	I0926 22:30:55.965562  213442 system_pods.go:89] "kube-ingress-dns-minikube" [bc1f29c4-b6c1-4daa-9fcd-50101584ff4a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:55.965569  213442 system_pods.go:89] "kube-proxy-qlkkx" [efd79ac0-eabc-4122-9f3f-f4a201bae1c4] Running
	I0926 22:30:55.965577  213442 system_pods.go:89] "kube-scheduler-addons-341571" [4b80f23a-724f-42e4-ad35-444631095085] Running
	I0926 22:30:55.965588  213442 system_pods.go:89] "metrics-server-85b7d694d7-t56ww" [6a75dcb7-123e-4f50-8ff9-89899540cfc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:55.965596  213442 system_pods.go:89] "nvidia-device-plugin-daemonset-8j546" [471abb6b-3a33-47a4-96ae-a4fd74359b94] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:55.965607  213442 system_pods.go:89] "registry-66898fdd98-ctjkw" [8a0f3fc0-2d0c-464a-9a56-61049fbb2336] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:55.965615  213442 system_pods.go:89] "registry-creds-764b6fb674-kzkvl" [fc99e324-fce1-4919-a021-73f9da278ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:55.965743  213442 system_pods.go:89] "registry-proxy-tzv57" [d80fb770-8ba3-4d49-9f0e-c1525a809861] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:55.965755  213442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8ldh" [fb361264-5548-46d4-970d-a4a33429ba2f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:55.965768  213442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s54k8" [ec65a6b7-a617-45ca-b19d-9ad6839c2289] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:55.965789  213442 system_pods.go:89] "storage-provisioner" [6a44aa85-9420-4cae-aa3a-fec9bb4baf43] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:55.965816  213442 retry.go:31] will retry after 305.209056ms: missing components: kube-dns
	I0926 22:30:56.123377  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:56.159246  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:56.159388  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:56.283232  213442 system_pods.go:86] 20 kube-system pods found
	I0926 22:30:56.283277  213442 system_pods.go:89] "amd-gpu-device-plugin-5w42z" [286d2d17-a9e5-4aed-8459-4513ce47bc96] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:56.283289  213442 system_pods.go:89] "coredns-66bc5c9577-6lgt2" [834c5a6c-dd34-425e-9aaf-cebe60b266f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 22:30:56.283299  213442 system_pods.go:89] "csi-hostpath-attacher-0" [876d9d95-8c11-4574-bb90-e310553cad36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:56.283308  213442 system_pods.go:89] "csi-hostpath-resizer-0" [a56bc47e-8278-4afc-92aa-aefa96c69f20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:56.283316  213442 system_pods.go:89] "csi-hostpathplugin-thdxj" [7bd2ab76-e381-4037-bca4-75f52426f1fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:56.283321  213442 system_pods.go:89] "etcd-addons-341571" [46828c2d-9348-47fe-8c4e-af8b24e30cc0] Running
	I0926 22:30:56.283329  213442 system_pods.go:89] "kindnet-qckxx" [4b7f8410-66c4-4e04-bff8-cc3beb2a4d42] Running
	I0926 22:30:56.283334  213442 system_pods.go:89] "kube-apiserver-addons-341571" [1f14818e-648d-477f-a099-33e60f4b2b9b] Running
	I0926 22:30:56.283342  213442 system_pods.go:89] "kube-controller-manager-addons-341571" [570402f2-bc85-43ec-8745-a69279070321] Running
	I0926 22:30:56.283349  213442 system_pods.go:89] "kube-ingress-dns-minikube" [bc1f29c4-b6c1-4daa-9fcd-50101584ff4a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:56.283354  213442 system_pods.go:89] "kube-proxy-qlkkx" [efd79ac0-eabc-4122-9f3f-f4a201bae1c4] Running
	I0926 22:30:56.283360  213442 system_pods.go:89] "kube-scheduler-addons-341571" [4b80f23a-724f-42e4-ad35-444631095085] Running
	I0926 22:30:56.283367  213442 system_pods.go:89] "metrics-server-85b7d694d7-t56ww" [6a75dcb7-123e-4f50-8ff9-89899540cfc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:56.283375  213442 system_pods.go:89] "nvidia-device-plugin-daemonset-8j546" [471abb6b-3a33-47a4-96ae-a4fd74359b94] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:56.283383  213442 system_pods.go:89] "registry-66898fdd98-ctjkw" [8a0f3fc0-2d0c-464a-9a56-61049fbb2336] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:56.283390  213442 system_pods.go:89] "registry-creds-764b6fb674-kzkvl" [fc99e324-fce1-4919-a021-73f9da278ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:56.283397  213442 system_pods.go:89] "registry-proxy-tzv57" [d80fb770-8ba3-4d49-9f0e-c1525a809861] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:56.283408  213442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8ldh" [fb361264-5548-46d4-970d-a4a33429ba2f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:56.283416  213442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s54k8" [ec65a6b7-a617-45ca-b19d-9ad6839c2289] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:56.283423  213442 system_pods.go:89] "storage-provisioner" [6a44aa85-9420-4cae-aa3a-fec9bb4baf43] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 22:30:56.283442  213442 retry.go:31] will retry after 310.765229ms: missing components: kube-dns
	I0926 22:30:56.285504  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:56.600034  213442 system_pods.go:86] 20 kube-system pods found
	I0926 22:30:56.600078  213442 system_pods.go:89] "amd-gpu-device-plugin-5w42z" [286d2d17-a9e5-4aed-8459-4513ce47bc96] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0926 22:30:56.600098  213442 system_pods.go:89] "coredns-66bc5c9577-6lgt2" [834c5a6c-dd34-425e-9aaf-cebe60b266f1] Running
	I0926 22:30:56.600110  213442 system_pods.go:89] "csi-hostpath-attacher-0" [876d9d95-8c11-4574-bb90-e310553cad36] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 22:30:56.600118  213442 system_pods.go:89] "csi-hostpath-resizer-0" [a56bc47e-8278-4afc-92aa-aefa96c69f20] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 22:30:56.600129  213442 system_pods.go:89] "csi-hostpathplugin-thdxj" [7bd2ab76-e381-4037-bca4-75f52426f1fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 22:30:56.600138  213442 system_pods.go:89] "etcd-addons-341571" [46828c2d-9348-47fe-8c4e-af8b24e30cc0] Running
	I0926 22:30:56.600145  213442 system_pods.go:89] "kindnet-qckxx" [4b7f8410-66c4-4e04-bff8-cc3beb2a4d42] Running
	I0926 22:30:56.600152  213442 system_pods.go:89] "kube-apiserver-addons-341571" [1f14818e-648d-477f-a099-33e60f4b2b9b] Running
	I0926 22:30:56.600156  213442 system_pods.go:89] "kube-controller-manager-addons-341571" [570402f2-bc85-43ec-8745-a69279070321] Running
	I0926 22:30:56.600162  213442 system_pods.go:89] "kube-ingress-dns-minikube" [bc1f29c4-b6c1-4daa-9fcd-50101584ff4a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0926 22:30:56.600166  213442 system_pods.go:89] "kube-proxy-qlkkx" [efd79ac0-eabc-4122-9f3f-f4a201bae1c4] Running
	I0926 22:30:56.600173  213442 system_pods.go:89] "kube-scheduler-addons-341571" [4b80f23a-724f-42e4-ad35-444631095085] Running
	I0926 22:30:56.600179  213442 system_pods.go:89] "metrics-server-85b7d694d7-t56ww" [6a75dcb7-123e-4f50-8ff9-89899540cfc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0926 22:30:56.600187  213442 system_pods.go:89] "nvidia-device-plugin-daemonset-8j546" [471abb6b-3a33-47a4-96ae-a4fd74359b94] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0926 22:30:56.600195  213442 system_pods.go:89] "registry-66898fdd98-ctjkw" [8a0f3fc0-2d0c-464a-9a56-61049fbb2336] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0926 22:30:56.600201  213442 system_pods.go:89] "registry-creds-764b6fb674-kzkvl" [fc99e324-fce1-4919-a021-73f9da278ad2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0926 22:30:56.600216  213442 system_pods.go:89] "registry-proxy-tzv57" [d80fb770-8ba3-4d49-9f0e-c1525a809861] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 22:30:56.600224  213442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r8ldh" [fb361264-5548-46d4-970d-a4a33429ba2f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:56.600230  213442 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s54k8" [ec65a6b7-a617-45ca-b19d-9ad6839c2289] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0926 22:30:56.600236  213442 system_pods.go:89] "storage-provisioner" [6a44aa85-9420-4cae-aa3a-fec9bb4baf43] Running
	I0926 22:30:56.600244  213442 system_pods.go:126] duration metric: took 871.168161ms to wait for k8s-apps to be running ...
	I0926 22:30:56.600255  213442 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 22:30:56.600304  213442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:30:56.614872  213442 system_svc.go:56] duration metric: took 14.605252ms WaitForService to wait for kubelet
	I0926 22:30:56.614907  213442 kubeadm.go:586] duration metric: took 42.711738099s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 22:30:56.614928  213442 node_conditions.go:102] verifying NodePressure condition ...
	I0926 22:30:56.617885  213442 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 22:30:56.617919  213442 node_conditions.go:123] node cpu capacity is 8
	I0926 22:30:56.617935  213442 node_conditions.go:105] duration metric: took 3.001117ms to run NodePressure ...
	I0926 22:30:56.617952  213442 start.go:241] waiting for startup goroutines ...
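
The kubelet wait a few lines up shells out over SSH and lets the exit code answer the question: with --quiet, systemctl prints nothing and exits 0 only when the unit is active. A minimal local sketch of the same probe (assumes a plain local shell rather than minikube's ssh_runner, and drops the extra "service" token the logged command shows):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletActive reports whether the kubelet systemd unit is running.
	// systemctl is-active --quiet signals state purely via exit code:
	// 0 means active, non-zero means inactive, failed, or unknown.
	func kubeletActive() bool {
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
		return err == nil
	}

	func main() {
		fmt.Println("kubelet active:", kubeletActive())
	}
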
	I0926 22:30:56.622800  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:56.658844  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:56.658897  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:56.782842  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.124078  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:57.159324  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:57.159388  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:57.283860  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:57.623754  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:57.659267  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:57.660982  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:57.782806  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.123650  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:58.158499  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:58.158754  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:58.283468  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:58.623328  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:58.659418  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:58.659866  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:58.784423  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.123734  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:59.158695  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:59.158728  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:59.283909  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:30:59.623116  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:30:59.658874  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:30:59.658944  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:30:59.782706  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.123261  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:00.159203  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:00.159241  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:00.283464  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:00.622806  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:00.658922  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:00.658986  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:00.783103  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.141860  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:01.242742  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:01.242780  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:01.283171  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:01.624022  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:01.659289  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:01.659474  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:01.783587  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.123331  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:02.159328  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:02.159387  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:02.283258  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:02.623156  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:02.659380  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:02.659386  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:02.783645  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.124120  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:03.159127  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:03.159243  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:03.283248  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:03.623258  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:03.659207  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:03.659238  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:03.783220  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.122956  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:04.158995  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:04.159027  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:04.282764  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:04.623047  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:04.723724  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:04.723854  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:04.783569  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:05.122870  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:05.159155  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:05.159239  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:05.283402  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:05.623421  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:05.659279  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:05.659391  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:05.784024  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:06.123931  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:06.158547  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:06.158662  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:06.283248  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:06.623350  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:06.659412  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:06.659444  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:06.783571  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:06.982901  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:31:07.122846  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:07.158848  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:07.158897  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:07.282706  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:31:07.591501  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:07.591537  213442 retry.go:31] will retry after 18.80152809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
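
The failure above is kubectl's client-side validation: the first document in /etc/kubernetes/addons/ig-crd.yaml is missing its required `apiVersion` and `kind` header fields, so the apply exits 1 even though every object from ig-deployment.yaml was accepted ("unchanged"/"configured"). minikube's retry.go then reschedules the apply after a delay (18.8s here). A minimal sketch of that retry-with-backoff pattern, under stated assumptions (the exponential growth and jitter are illustrative; minikube's actual retry.go may compute its delays differently):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs fn up to attempts times, sleeping an exponentially
	// growing, jittered delay between failures, and returns the last error.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// exponential growth with jitter in [0.5x, 1.5x), echoing the
			// varying "will retry after ..." delays seen throughout this log
			delay := base << uint(i)
			jitter := time.Duration(rand.Int63n(int64(delay)))
			time.Sleep(delay/2 + jitter)
		}
		return fmt.Errorf("all %d attempts failed, last error: %w", attempts, err)
	}

	func main() {
		err := retryWithBackoff(3, time.Second, func() error {
			return fmt.Errorf("kubectl apply failed") // stand-in for the real apply
		})
		fmt.Println(err)
	}
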
	I0926 22:31:07.623052  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:07.658996  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:07.659169  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:07.783668  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:08.126230  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:08.161310  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:08.161895  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:08.283291  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:08.623900  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:08.658536  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:08.658621  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:08.783497  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:09.122554  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:09.158589  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:09.158634  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.283393  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:09.623280  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:09.659364  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:09.659560  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:09.783023  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:10.124132  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:10.159216  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:10.159242  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:10.283103  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:10.623491  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:10.658532  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:10.658632  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:10.783824  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:11.123399  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:11.159398  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:11.159444  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:11.283437  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:11.623055  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:11.659358  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:11.659405  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:11.783527  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:12.129174  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:12.159258  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:12.159402  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:12.283491  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:12.688077  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:12.688077  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:12.688171  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:12.783975  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:13.123656  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:13.158694  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:13.158691  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:13.284607  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:13.623180  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:13.659642  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:13.659731  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:13.782884  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:14.123813  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:14.158683  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:14.158798  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:14.283827  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:14.623681  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:14.658618  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:14.658770  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:14.783757  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:15.123233  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:15.158946  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:15.159107  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:15.282958  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:15.623683  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:15.658584  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:15.658609  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:15.783344  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.123134  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.159172  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:16.159173  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:16.282946  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:16.623505  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:16.658486  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:16.658503  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:16.783774  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:17.123416  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:17.159293  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:17.159385  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:17.283391  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:17.623294  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:17.659422  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:17.659463  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:17.783538  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:18.123454  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:18.159296  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:18.159310  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:18.283289  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:18.622731  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:18.659022  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:18.659022  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:18.783775  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:19.123416  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:19.158857  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:19.159048  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:19.282995  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:19.623010  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:19.723285  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:19.723327  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:19.782969  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:20.124413  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:20.158931  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:20.158958  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:20.283616  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:20.622751  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:20.658564  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:20.658571  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:20.783386  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:21.122638  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:21.158473  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:21.158538  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:21.283459  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:21.623445  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:21.659323  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:21.659539  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:21.783480  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:22.128583  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:22.160194  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:22.160397  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:22.286840  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:22.623340  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:22.659196  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:22.659252  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:22.783509  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:23.122716  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:23.158724  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:23.158792  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:23.283202  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:23.623294  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:23.658840  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:23.658919  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:23.783060  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:24.122692  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:24.158682  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:24.158756  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:24.283720  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:24.623324  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:24.659376  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:24.659566  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:24.783828  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:25.123429  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:25.159586  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:25.159626  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:25.283622  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:25.623707  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:25.658894  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:25.658937  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:25.782945  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:26.123621  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:26.158277  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:26.158422  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:26.288934  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:26.394062  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0926 22:31:26.622873  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:26.658456  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:26.658951  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:26.783134  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0926 22:31:27.004077  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0926 22:31:27.004132  213442 retry.go:31] will retry after 42.270410356s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
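	[editor's note] The failure above (and its scheduled retry) comes from kubectl's client-side schema validation, which requires every document in a manifest to declare apiVersion and kind; the ig-crd.yaml applied here evidently contains a document missing both. A minimal sketch of something that passes that validation check — the ConfigMap is a hypothetical stand-in for illustration, not the actual gadget CRD:

	    kubectl apply --dry-run=client -f - <<'EOF'
	    apiVersion: v1
	    kind: ConfigMap
	    metadata:
	      name: validation-demo   # hypothetical name
	    EOF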
	I0926 22:31:27.123748  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:27.158677  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:27.158701  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:27.283990  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:27.624211  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:27.659078  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:27.659267  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:27.783214  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:28.123581  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:28.158354  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:28.158602  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:28.475553  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:28.622969  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:28.659727  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:28.660021  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:28.783331  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:29.122899  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:29.158876  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:29.158948  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:29.282719  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:29.623967  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:29.658797  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:29.658918  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:29.783870  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:30.123808  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:30.158701  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:30.158709  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:30.283446  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:30.623595  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:30.658912  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:30.659055  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:30.783228  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:31.122919  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:31.158547  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:31.158594  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:31.283811  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:31.623571  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:31.659434  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:31.659524  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:31.783511  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:32.127753  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:32.158597  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 22:31:32.158633  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:32.283558  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:32.623358  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:32.659236  213442 kapi.go:107] duration metric: took 1m17.003914344s to wait for kubernetes.io/minikube-addons=registry ...
	I0926 22:31:32.659244  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:32.783305  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:33.122867  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:33.158832  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:33.283712  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:33.626914  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:33.658819  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:33.783857  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:34.123609  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:34.159499  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:34.283047  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:34.623388  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:34.659582  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:34.783509  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:35.123449  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:35.159507  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:35.292190  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:35.622364  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:35.659947  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:35.783949  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:36.123695  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:36.159738  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:36.283533  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:36.623713  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:36.659829  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:36.784027  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:37.123214  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:37.159118  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:37.283273  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:37.623241  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:37.659218  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:37.783498  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:38.122944  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:38.158942  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:38.282544  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:38.623659  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:38.659386  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:38.783166  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:39.123510  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:39.159565  213442 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 22:31:39.283276  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:39.623326  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:39.658689  213442 kapi.go:107] duration metric: took 1m24.003084725s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 22:31:39.783564  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:40.123222  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:40.283796  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:40.624210  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:40.783908  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:41.123532  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:41.283499  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:41.623010  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:41.782418  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:42.123793  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:42.284565  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 22:31:42.623284  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:42.783617  213442 kapi.go:107] duration metric: took 1m20.503837961s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0926 22:31:42.785369  213442 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-341571 cluster.
	I0926 22:31:42.786458  213442 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 22:31:42.787669  213442 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
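	[editor's note] Per the gcp-auth messages above, a pod opts out of credential mounting via the gcp-auth-skip-secret label; the message says the key is what matters, so the value shown is an assumption. A minimal sketch (pod name and image are hypothetical):

	    kubectl --context addons-341571 apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds              # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"  # key presence is what the addon checks (assumption)
	    spec:
	      containers:
	      - name: app
	        image: busybox                # hypothetical image
	        command: ["sleep", "3600"]
	    EOF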
	I0926 22:31:43.123137  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:43.623539  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:44.123484  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:44.624882  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:45.123615  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:45.623978  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:46.123523  213442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 22:31:46.623337  213442 kapi.go:107] duration metric: took 1m30.504094841s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0926 22:32:09.279173  213442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0926 22:32:09.830414  213442 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0926 22:32:09.830548  213442 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
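	[editor's note] Once the retry budget is exhausted, the inspektor-gadget addon is reported as failed but the rest of the run proceeds. One way to retry it by hand after the manifest is fixed, assuming the profile name from this run:

	    minikube -p addons-341571 addons enable inspektor-gadget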
	I0926 22:32:09.832156  213442 out.go:179] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, cloud-spanner, metrics-server, default-storageclass, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0926 22:32:09.833312  213442 addons.go:514] duration metric: took 1m55.930123896s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin amd-gpu-device-plugin registry-creds cloud-spanner metrics-server default-storageclass yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0926 22:32:09.833356  213442 start.go:246] waiting for cluster config update ...
	I0926 22:32:09.833381  213442 start.go:255] writing updated cluster config ...
	I0926 22:32:09.833633  213442 ssh_runner.go:195] Run: rm -f paused
	I0926 22:32:09.837528  213442 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:32:09.841025  213442 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6lgt2" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:09.845199  213442 pod_ready.go:94] pod "coredns-66bc5c9577-6lgt2" is "Ready"
	I0926 22:32:09.845220  213442 pod_ready.go:86] duration metric: took 4.170651ms for pod "coredns-66bc5c9577-6lgt2" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:09.847013  213442 pod_ready.go:83] waiting for pod "etcd-addons-341571" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:09.850605  213442 pod_ready.go:94] pod "etcd-addons-341571" is "Ready"
	I0926 22:32:09.850621  213442 pod_ready.go:86] duration metric: took 3.589418ms for pod "etcd-addons-341571" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:09.852346  213442 pod_ready.go:83] waiting for pod "kube-apiserver-addons-341571" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:09.855859  213442 pod_ready.go:94] pod "kube-apiserver-addons-341571" is "Ready"
	I0926 22:32:09.855877  213442 pod_ready.go:86] duration metric: took 3.515461ms for pod "kube-apiserver-addons-341571" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:09.857567  213442 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-341571" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:10.242378  213442 pod_ready.go:94] pod "kube-controller-manager-addons-341571" is "Ready"
	I0926 22:32:10.242406  213442 pod_ready.go:86] duration metric: took 384.82402ms for pod "kube-controller-manager-addons-341571" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:10.441774  213442 pod_ready.go:83] waiting for pod "kube-proxy-qlkkx" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:10.841128  213442 pod_ready.go:94] pod "kube-proxy-qlkkx" is "Ready"
	I0926 22:32:10.841159  213442 pod_ready.go:86] duration metric: took 399.358903ms for pod "kube-proxy-qlkkx" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:11.041660  213442 pod_ready.go:83] waiting for pod "kube-scheduler-addons-341571" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:11.441892  213442 pod_ready.go:94] pod "kube-scheduler-addons-341571" is "Ready"
	I0926 22:32:11.441919  213442 pod_ready.go:86] duration metric: took 400.231421ms for pod "kube-scheduler-addons-341571" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 22:32:11.441931  213442 pod_ready.go:40] duration metric: took 1.604371126s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 22:32:11.487200  213442 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 22:32:11.488784  213442 out.go:179] * Done! kubectl is now configured to use "addons-341571" cluster and "default" namespace by default
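	[editor's note] The pod_ready waits above poll the kube-system control-plane pods by label. A roughly equivalent manual check for one of those labels (kube-dns shown; selector and timeout are illustrative, not what minikube runs):

	    kubectl --context addons-341571 -n kube-system wait \
	      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m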
	
	
	==> CRI-O <==
	Sep 26 22:35:18 addons-341571 crio[933]: time="2025-09-26 22:35:18.973638102Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-27wqf/POD" id=06e36892-5ad6-4e11-abf5-96eeea286514 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 26 22:35:18 addons-341571 crio[933]: time="2025-09-26 22:35:18.973727245Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 26 22:35:18 addons-341571 crio[933]: time="2025-09-26 22:35:18.992113932Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-27wqf Namespace:default ID:e3fea1df8b358863315ecb2eaf5d8d8861d1028cb6e65744751f69c4ba42e9fd UID:9aa32c1f-420b-4afc-8139-77b77e867acf NetNS:/var/run/netns/ee9040d6-54f2-4f6e-a6da-87d6a0066b49 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 26 22:35:18 addons-341571 crio[933]: time="2025-09-26 22:35:18.992155513Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-27wqf to CNI network \"kindnet\" (type=ptp)"
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.003371439Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-27wqf Namespace:default ID:e3fea1df8b358863315ecb2eaf5d8d8861d1028cb6e65744751f69c4ba42e9fd UID:9aa32c1f-420b-4afc-8139-77b77e867acf NetNS:/var/run/netns/ee9040d6-54f2-4f6e-a6da-87d6a0066b49 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.003542579Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-27wqf for CNI network kindnet (type=ptp)"
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.004362365Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.005197950Z" level=info msg="Ran pod sandbox e3fea1df8b358863315ecb2eaf5d8d8861d1028cb6e65744751f69c4ba42e9fd with infra container: default/hello-world-app-5d498dc89-27wqf/POD" id=06e36892-5ad6-4e11-abf5-96eeea286514 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.006465461Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ef6ad14d-04e6-440b-8590-c5055ec837c6 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.006711934Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ef6ad14d-04e6-440b-8590-c5055ec837c6 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.007320685Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=873a3671-95e7-4347-8129-58769461099f name=/runtime.v1.ImageService/PullImage
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.011524899Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.168080929Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.549308812Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=873a3671-95e7-4347-8129-58769461099f name=/runtime.v1.ImageService/PullImage
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.550053949Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b13c8eef-4843-4571-ae59-221880377d21 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.550839200Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b13c8eef-4843-4571-ae59-221880377d21 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.551687718Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e1586811-37de-435e-8796-13066e5546a3 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.552485117Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e1586811-37de-435e-8796-13066e5546a3 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.555934905Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-27wqf/hello-world-app" id=33e043a7-2cd7-45df-bf69-a2cc39240a68 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.556037678Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.575179585Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fcd37c90404694f4c37d530b3a18f3c0e61fbced4dc3ac64dc61c921a13029c8/merged/etc/passwd: no such file or directory"
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.575229198Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fcd37c90404694f4c37d530b3a18f3c0e61fbced4dc3ac64dc61c921a13029c8/merged/etc/group: no such file or directory"
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.646552083Z" level=info msg="Created container 7fdc930afd6725924266a978714d24a9d872511d13601015f3a9ccf890e41a11: default/hello-world-app-5d498dc89-27wqf/hello-world-app" id=33e043a7-2cd7-45df-bf69-a2cc39240a68 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.647123131Z" level=info msg="Starting container: 7fdc930afd6725924266a978714d24a9d872511d13601015f3a9ccf890e41a11" id=e50730a0-aa15-4eba-8a8b-3629cc1f144c name=/runtime.v1.RuntimeService/StartContainer
	Sep 26 22:35:19 addons-341571 crio[933]: time="2025-09-26 22:35:19.653680696Z" level=info msg="Started container" PID=12421 containerID=7fdc930afd6725924266a978714d24a9d872511d13601015f3a9ccf890e41a11 description=default/hello-world-app-5d498dc89-27wqf/hello-world-app id=e50730a0-aa15-4eba-8a8b-3629cc1f144c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3fea1df8b358863315ecb2eaf5d8d8861d1028cb6e65744751f69c4ba42e9fd
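	[editor's note] The CRI-O lines above show the standard pull-then-run flow: ImageStatus finds docker.io/kicbase/echo-server:1.0 missing, PullImage fetches it by digest, then CreateContainer/StartContainer launch it. A sketch of driving the same ImageService calls by hand from a shell on the node:

	    minikube -p addons-341571 ssh
	    sudo crictl pull docker.io/kicbase/echo-server:1.0
	    sudo crictl images | grep echo-server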
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	7fdc930afd672       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   e3fea1df8b358       hello-world-app-5d498dc89-27wqf
	24ed8c9227cf2       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   465c0a067f1a9       nginx
	8b07ee630bbda       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   2189dd08110e4       busybox
	f260c22bfdbfe       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago            Running             controller                0                   32d8d35017b15       ingress-nginx-controller-9cc49f96f-kppfr
	cdfbd1616c416       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            3 minutes ago            Running             gadget                    0                   cacca8cad3582       gadget-nn52s
	864379c0c8c25       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              patch                     0                   78d4d16daffeb       ingress-nginx-admission-patch-mqbh9
	05c018494f5ce       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              create                    0                   3346b7ba84ed6       ingress-nginx-admission-create-fxs8m
	4fc755116f0e9       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago            Running             minikube-ingress-dns      0                   79e8ee916065a       kube-ingress-dns-minikube
	9efa64a534c1a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   5f32dc6432730       storage-provisioner
	9614ee86500a1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago            Running             coredns                   0                   3a8e579fa4010       coredns-66bc5c9577-6lgt2
	7f13c1f9ea0b7       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago            Running             kube-proxy                0                   b76d9062490f9       kube-proxy-qlkkx
	38dec3bce1fb8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             5 minutes ago            Running             kindnet-cni               0                   ffaf93ba0c551       kindnet-qckxx
	30730a16d0a9d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago            Running             etcd                      0                   184ff6fa0382a       etcd-addons-341571
	19f1deafde965       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago            Running             kube-apiserver            0                   856b64da4ad96       kube-apiserver-addons-341571
	faa8d9d363a86       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago            Running             kube-controller-manager   0                   321e0f9a7a042       kube-controller-manager-addons-341571
	624552694fc10       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago            Running             kube-scheduler            0                   e3f260ffe3a55       kube-scheduler-addons-341571
	
	
	==> coredns [9614ee86500a15b4276c537e0fd2319ace3e095369d3c3c17c0df0a704ab73b9] <==
	[INFO] 10.244.0.19:58244 - 41473 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000136227s
	[INFO] 10.244.0.19:53108 - 36410 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000048035s
	[INFO] 10.244.0.19:53108 - 35920 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000069969s
	[INFO] 10.244.0.19:57174 - 60752 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000056518s
	[INFO] 10.244.0.19:57174 - 61002 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000108883s
	[INFO] 10.244.0.19:58242 - 40748 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000108207s
	[INFO] 10.244.0.19:58242 - 40526 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127832s
	[INFO] 10.244.0.22:34917 - 32980 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000261671s
	[INFO] 10.244.0.22:53532 - 42838 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000358001s
	[INFO] 10.244.0.22:60710 - 43721 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000169352s
	[INFO] 10.244.0.22:37673 - 13374 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166367s
	[INFO] 10.244.0.22:54278 - 50326 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012211s
	[INFO] 10.244.0.22:60907 - 6186 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163565s
	[INFO] 10.244.0.22:58231 - 20292 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004601743s
	[INFO] 10.244.0.22:34146 - 38937 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004907734s
	[INFO] 10.244.0.22:36543 - 27469 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005131993s
	[INFO] 10.244.0.22:47772 - 59177 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00591275s
	[INFO] 10.244.0.22:60843 - 52617 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006774009s
	[INFO] 10.244.0.22:38373 - 9806 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007142391s
	[INFO] 10.244.0.22:37696 - 45876 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005324998s
	[INFO] 10.244.0.22:49556 - 36480 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005640509s
	[INFO] 10.244.0.22:43915 - 32934 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001036813s
	[INFO] 10.244.0.22:60764 - 53699 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002342264s
	[INFO] 10.244.0.29:39680 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000234926s
	[INFO] 10.244.0.29:54846 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000203736s
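	[editor's note] The runs of NXDOMAIN answers above are ordinary resolv.conf search-path expansion: with the kubelet-default ndots:5, a name like registry.kube-system.svc.cluster.local (fewer than five dots) is first tried with each search suffix (.us-east4-a.c.k8s-minikube.internal, .google.internal, ...) before the literal name returns NOERROR. A way to observe this from the busybox pod already running in this cluster:

	    kubectl --context addons-341571 exec busybox -- cat /etc/resolv.conf
	    kubectl --context addons-341571 exec busybox -- nslookup registry.kube-system.svc.cluster.local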
	
	
	==> describe nodes <==
	Name:               addons-341571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-341571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=addons-341571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_30_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-341571
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:30:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-341571
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:35:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:33:11 +0000   Fri, 26 Sep 2025 22:30:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:33:11 +0000   Fri, 26 Sep 2025 22:30:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:33:11 +0000   Fri, 26 Sep 2025 22:30:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:33:11 +0000   Fri, 26 Sep 2025 22:30:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-341571
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 53e66581e4294f589f1abed34f56a351
	  System UUID:                3fbf5685-c057-4914-8aa2-e15ee866c34a
	  Boot ID:                    ce94279f-6745-4217-952d-f9fda1755c22
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-5d498dc89-27wqf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-nn52s                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-kppfr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m5s
	  kube-system                 coredns-66bc5c9577-6lgt2                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m6s
	  kube-system                 etcd-addons-341571                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m12s
	  kube-system                 kindnet-qckxx                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m6s
	  kube-system                 kube-apiserver-addons-341571                250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-addons-341571       200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-qlkkx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-addons-341571                100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m4s   kube-proxy       
	  Normal  Starting                 5m12s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m12s  kubelet          Node addons-341571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s  kubelet          Node addons-341571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s  kubelet          Node addons-341571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m8s   node-controller  Node addons-341571 event: Registered Node addons-341571 in Controller
	  Normal  NodeReady                4m25s  kubelet          Node addons-341571 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.088607] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025515] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.894785] kauditd_printk_skb: 47 callbacks suppressed
	[Sep26 22:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.003220] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023850] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +2.048746] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +4.030628] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +8.319153] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[ +16.382271] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[Sep26 22:34] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000033] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
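	[editor's note] The repeating "martian source" lines record packets arriving on eth0 destined for 10.244.0.21 with the loopback source 127.0.0.1, which the kernel logs when reverse-path filtering and martian logging are enabled. A check (not a fix) of the relevant knobs on the node:

	    minikube -p addons-341571 ssh
	    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians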
	
	
	==> etcd [30730a16d0a9d60a858ab93085149555428833581a3c7c9013a4829042f26170] <==
	{"level":"warn","ts":"2025-09-26T22:30:05.336947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.343808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.351054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.357309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.364149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.371309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.377408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.384368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.390322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.396600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.402312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.419098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.425376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.431444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:05.478338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:42.871457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:42.878562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:42.896745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:30:42.903256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43208","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:31:26.259021Z","caller":"traceutil/trace.go:172","msg":"trace[1220148263] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"100.706039ms","start":"2025-09-26T22:31:26.158292Z","end":"2025-09-26T22:31:26.258998Z","steps":["trace[1220148263] 'process raft request'  (duration: 100.501076ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:28.473373Z","caller":"traceutil/trace.go:172","msg":"trace[1837610952] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1156; }","duration":"191.265568ms","start":"2025-09-26T22:31:28.282080Z","end":"2025-09-26T22:31:28.473346Z","steps":["trace[1837610952] 'read index received'  (duration: 191.256117ms)","trace[1837610952] 'applied index is now lower than readState.Index'  (duration: 7.614µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-26T22:31:28.473498Z","caller":"traceutil/trace.go:172","msg":"trace[727729620] transaction","detail":"{read_only:false; response_revision:1123; number_of_response:1; }","duration":"208.057021ms","start":"2025-09-26T22:31:28.265423Z","end":"2025-09-26T22:31:28.473480Z","steps":["trace[727729620] 'process raft request'  (duration: 207.946642ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T22:31:28.473562Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.430388ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T22:31:28.473623Z","caller":"traceutil/trace.go:172","msg":"trace[216925862] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1123; }","duration":"191.540535ms","start":"2025-09-26T22:31:28.282070Z","end":"2025-09-26T22:31:28.473611Z","steps":["trace[216925862] 'agreement among raft nodes before linearized reading'  (duration: 191.394102ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T22:31:50.496284Z","caller":"traceutil/trace.go:172","msg":"trace[313632233] transaction","detail":"{read_only:false; response_revision:1230; number_of_response:1; }","duration":"109.088987ms","start":"2025-09-26T22:31:50.387167Z","end":"2025-09-26T22:31:50.496256Z","steps":["trace[313632233] 'process raft request'  (duration: 46.18294ms)","trace[313632233] 'compare'  (duration: 62.782903ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:35:20 up  2:17,  0 users,  load average: 0.27, 21.04, 49.60
	Linux addons-341571 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [38dec3bce1fb8b299adbbf568cdde5b926e92b5ea6fd833bb155b751ff2211b1] <==
	I0926 22:33:15.087394       1 main.go:301] handling current node
	I0926 22:33:25.090014       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:33:25.090059       1 main.go:301] handling current node
	I0926 22:33:35.087753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:33:35.087795       1 main.go:301] handling current node
	I0926 22:33:45.089350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:33:45.089386       1 main.go:301] handling current node
	I0926 22:33:55.092333       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:33:55.092380       1 main.go:301] handling current node
	I0926 22:34:05.095433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:34:05.095480       1 main.go:301] handling current node
	I0926 22:34:15.094535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:34:15.094576       1 main.go:301] handling current node
	I0926 22:34:25.089184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:34:25.089231       1 main.go:301] handling current node
	I0926 22:34:35.095296       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:34:35.095331       1 main.go:301] handling current node
	I0926 22:34:45.090292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:34:45.090329       1 main.go:301] handling current node
	I0926 22:34:55.092681       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:34:55.092728       1 main.go:301] handling current node
	I0926 22:35:05.094187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:35:05.094224       1 main.go:301] handling current node
	I0926 22:35:15.095299       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:35:15.095348       1 main.go:301] handling current node
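	
	The kindnet pairs above repeat on a fixed ~10-second cadence, one reconcile pass per tick. A minimal sketch of such a loop (illustrative; the node IP is copied from the log, and the loop is bounded so the example terminates):
	
	package main
	
	import (
		"log"
		"time"
	)
	
	func main() {
		nodeIPs := map[string]struct{}{"192.168.49.2": {}} // node IP from the log above
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for i := 0; i < 3; i++ { // bounded here; the real loop runs until shutdown
			<-ticker.C
			log.Printf("Handling node with IPs: %v", nodeIPs)
			log.Printf("handling current node")
		}
	}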
	
	
	==> kube-apiserver [19f1deafde965723d5ed95c40c15817379aec6c9c8ff49659775f21adedf1290] <==
	E0926 22:32:21.430753       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58492: use of closed network connection
	I0926 22:32:30.525572       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.231.253"}
	I0926 22:32:49.660523       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:32:51.607766       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0926 22:32:53.367569       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0926 22:32:53.535210       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.95.181"}
	E0926 22:32:53.580545       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0926 22:33:05.431503       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0926 22:33:10.699161       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:33:10.699209       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:33:10.715484       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:33:10.715528       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:33:10.715633       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:33:10.731935       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:33:10.731979       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0926 22:33:10.740851       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0926 22:33:10.740892       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0926 22:33:11.522216       1 watch.go:272] "Unhandled Error" err="client disconnected" logger="UnhandledError"
	W0926 22:33:11.716589       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0926 22:33:11.741547       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0926 22:33:11.753897       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0926 22:33:33.057037       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:34:17.154267       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:35:02.218028       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:35:18.737117       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.88.133"}
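	
	The "Terminating all watchers" lines coincide with the volumesnapshot API groups being removed (the test tears the addon down), so the apiserver closes every open watch on those resources and clients must re-list before watching again. A minimal sketch of that consume-until-closed pattern, with the watch modeled as a plain channel:
	
	package main
	
	import "fmt"
	
	// consume drains a watch stream; when the server terminates the watch,
	// the channel closes and the caller is expected to re-list and re-watch.
	func consume(events <-chan string) {
		for ev := range events {
			fmt.Println("event:", ev)
		}
		fmt.Println("watch terminated; re-list and re-watch")
	}
	
	func main() {
		ch := make(chan string, 1)
		ch <- "DELETED volumesnapshotclasses.snapshot.storage.k8s.io"
		close(ch) // simulates "Terminating all watchers from cacher ..."
		consume(ch)
	}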
	
	
	==> kube-controller-manager [faa8d9d363a866574d90a5cf4070b37eeda8e844bec2f665042d2d2f5733beaa] <==
	E0926 22:33:20.771642       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:33:20.772883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0926 22:33:22.273057       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E0926 22:33:27.541659       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:33:27.542699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:33:27.790698       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:33:27.791787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:33:30.392146       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:33:30.393308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:33:42.160025       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:33:42.161222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:33:49.599562       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:33:49.600600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:33:51.405318       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:33:51.406411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:34:09.878404       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:34:09.879538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:34:22.197897       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:34:22.199187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:34:34.387733       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:34:34.388926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:35:05.571653       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:35:05.572858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0926 22:35:06.381965       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0926 22:35:06.383177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
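	
	Each controller-manager pair above is the reflector's fallback path: a streaming watch-list request fails (here, apparently against resources the server no longer serves), so it logs the error and retries with plain LIST followed by WATCH. A minimal sketch of that decision (function names are illustrative):
	
	package main
	
	import (
		"errors"
		"fmt"
	)
	
	var errNotFound = errors.New("the server could not find the requested resource")
	
	func watchList() error { return errNotFound } // streaming list fails here
	
	func listThenWatch() error {
		fmt.Println("falling back to the standard LIST/WATCH semantics")
		return nil // plain LIST + WATCH still makes progress
	}
	
	func sync() error {
		if err := watchList(); err != nil {
			fmt.Println("The watchlist request ended with an error:", err)
			return listThenWatch()
		}
		return nil
	}
	
	func main() { _ = sync() }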
	
	
	==> kube-proxy [7f13c1f9ea0b756651922849dc647107b42e3a736f30cefe306818b335c90f6c] <==
	I0926 22:30:14.724193       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:30:15.047819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:30:15.165128       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:30:15.165182       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:30:15.165284       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:30:15.271377       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:30:15.271551       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:30:15.286182       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:30:15.300495       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:30:15.300536       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:30:15.302116       1 config.go:200] "Starting service config controller"
	I0926 22:30:15.302138       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:30:15.302164       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:30:15.302169       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:30:15.302190       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:30:15.302195       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:30:15.302948       1 config.go:309] "Starting node config controller"
	I0926 22:30:15.302971       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:30:15.302978       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:30:15.402616       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0926 22:30:15.403130       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:30:15.403151       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
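	
	kube-proxy above starts its config controllers, then blocks on "Waiting for caches to sync" until each informer finishes its initial LIST, at which point "Caches are synced" unblocks processing. A minimal sketch of that gate (illustrative; client-go's real helper is cache.WaitForCacheSync):
	
	package main
	
	import (
		"fmt"
		"sync/atomic"
		"time"
	)
	
	func waitForCacheSync(name string, synced *atomic.Bool) {
		fmt.Printf("Waiting for caches to sync: controller=%q\n", name)
		for !synced.Load() { // poll until the initial LIST has been applied
			time.Sleep(10 * time.Millisecond)
		}
		fmt.Printf("Caches are synced: controller=%q\n", name)
	}
	
	func main() {
		var synced atomic.Bool
		go func() { // the initial LIST completes shortly after startup
			time.Sleep(50 * time.Millisecond)
			synced.Store(true)
		}()
		waitForCacheSync("service config", &synced)
	}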
	
	
	==> kube-scheduler [624552694fc100f93dde87eeb3cb158d738173c7d6d63933abd2d99fd443787c] <==
	E0926 22:30:05.885523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:30:05.885647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:30:05.885666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:30:05.885737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:30:05.885972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:30:05.886016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:30:05.886129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:30:05.886160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:30:05.886271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:30:05.886322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:30:05.886340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:30:05.886392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:30:05.886438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:30:05.886557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:30:05.886584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:30:05.886624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:30:06.705469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:30:06.711744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:30:06.767279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:30:06.808158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:30:06.841261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:30:06.910945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:30:06.919941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:30:07.124812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I0926 22:30:09.783816       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
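	
	The scheduler's "forbidden" errors above are a startup race: its list calls land before RBAC for system:kube-scheduler has propagated, and the reflector simply retries until they succeed (the "Caches are synced" line at the end). A minimal sketch of that retry-until-authorized loop (illustrative):
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	var errForbidden = errors.New(`pods is forbidden: User "system:kube-scheduler" cannot list resource "pods"`)
	
	func listPods(attempt int) error {
		if attempt < 3 { // authorization not yet propagated
			return errForbidden
		}
		return nil
	}
	
	func main() {
		for attempt := 1; ; attempt++ {
			if err := listPods(attempt); err != nil {
				fmt.Println("Failed to watch:", err)
				time.Sleep(100 * time.Millisecond) // real reflectors use backoff
				continue
			}
			fmt.Println("Caches are synced")
			return
		}
	}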
	
	
	==> kubelet <==
	Sep 26 22:33:38 addons-341571 kubelet[1546]: E0926 22:33:38.285493    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926018285235157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:33:38 addons-341571 kubelet[1546]: E0926 22:33:38.285532    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926018285235157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:33:48 addons-341571 kubelet[1546]: E0926 22:33:48.287843    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926028287623683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:33:48 addons-341571 kubelet[1546]: E0926 22:33:48.287876    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926028287623683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:33:58 addons-341571 kubelet[1546]: E0926 22:33:58.290018    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926038289756369  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:33:58 addons-341571 kubelet[1546]: E0926 22:33:58.290052    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926038289756369  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:08 addons-341571 kubelet[1546]: E0926 22:34:08.292862    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926048292610342  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:08 addons-341571 kubelet[1546]: E0926 22:34:08.292906    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926048292610342  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:18 addons-341571 kubelet[1546]: E0926 22:34:18.294721    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926058294481222  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:18 addons-341571 kubelet[1546]: E0926 22:34:18.294756    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926058294481222  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:28 addons-341571 kubelet[1546]: E0926 22:34:28.297732    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926068297480898  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:28 addons-341571 kubelet[1546]: E0926 22:34:28.297767    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926068297480898  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:38 addons-341571 kubelet[1546]: I0926 22:34:38.234136    1546 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 26 22:34:38 addons-341571 kubelet[1546]: E0926 22:34:38.299801    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926078299581424  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:38 addons-341571 kubelet[1546]: E0926 22:34:38.299838    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926078299581424  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:48 addons-341571 kubelet[1546]: E0926 22:34:48.301825    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926088301589702  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:48 addons-341571 kubelet[1546]: E0926 22:34:48.301854    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926088301589702  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:58 addons-341571 kubelet[1546]: E0926 22:34:58.304418    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926098304144881  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:34:58 addons-341571 kubelet[1546]: E0926 22:34:58.304452    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926098304144881  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:35:08 addons-341571 kubelet[1546]: E0926 22:35:08.307029    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926108306761117  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:35:08 addons-341571 kubelet[1546]: E0926 22:35:08.307062    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926108306761117  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:35:18 addons-341571 kubelet[1546]: E0926 22:35:18.309810    1546 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926118309539495  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:35:18 addons-341571 kubelet[1546]: E0926 22:35:18.309841    1546 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926118309539495  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 26 22:35:18 addons-341571 kubelet[1546]: I0926 22:35:18.683837    1546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qzqg\" (UniqueName: \"kubernetes.io/projected/9aa32c1f-420b-4afc-8139-77b77e867acf-kube-api-access-2qzqg\") pod \"hello-world-app-5d498dc89-27wqf\" (UID: \"9aa32c1f-420b-4afc-8139-77b77e867acf\") " pod="default/hello-world-app-5d498dc89-27wqf"
	Sep 26 22:35:20 addons-341571 kubelet[1546]: I0926 22:35:20.239218    1546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-27wqf" podStartSLOduration=1.695083229 podStartE2EDuration="2.23919172s" podCreationTimestamp="2025-09-26 22:35:18 +0000 UTC" firstStartedPulling="2025-09-26 22:35:19.006942629 +0000 UTC m=+310.863736426" lastFinishedPulling="2025-09-26 22:35:19.55105111 +0000 UTC m=+311.407844917" observedRunningTime="2025-09-26 22:35:20.238681072 +0000 UTC m=+312.095474900" watchObservedRunningTime="2025-09-26 22:35:20.23919172 +0000 UTC m=+312.095985536"
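	
	The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that minus the image-pull window. A short re-derivation from the printed timestamps (values copied from the log; the last digits differ slightly because kubelet subtracts monotonic-clock readings, the m=+ values):
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		parse := func(s string) time.Time {
			t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
			if err != nil {
				panic(err)
			}
			return t
		}
		created := parse("2025-09-26 22:35:18 +0000 UTC")
		firstPull := parse("2025-09-26 22:35:19.006942629 +0000 UTC")
		lastPull := parse("2025-09-26 22:35:19.55105111 +0000 UTC")
		running := parse("2025-09-26 22:35:20.23919172 +0000 UTC")
	
		e2e := running.Sub(created)          // podStartE2EDuration = 2.23919172s
		slo := e2e - lastPull.Sub(firstPull) // ≈ podStartSLOduration = 1.695083229s
		fmt.Println("E2E:", e2e, "SLO:", slo)
	}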
	
	
	==> storage-provisioner [9efa64a534c1ae3dfa0719bd98f9a2e20a84d998d0e6fb2caba0b3d06b7ab65a] <==
	W0926 22:34:55.418577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:57.421624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:57.427111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:59.430514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:34:59.434536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:01.437797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:01.441647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:03.445011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:03.449130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:05.452642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:05.457923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:07.460867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:07.464692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:09.467751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:09.472661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:11.475947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:11.481054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:13.484450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:13.489597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:15.492559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:15.496590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:17.499598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:17.504266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:19.507788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:35:19.511969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
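	
	The W lines above are client-go surfacing HTTP Warning headers from the apiserver; the storage-provisioner keeps hitting the deprecated v1 Endpoints API, plausibly from periodic Endpoints polling given the ~2s cadence. A minimal sketch of a handler with the same shape as client-go's WarningHandler interface, kept standalone so it runs without the dependency:
	
	package main
	
	import "log"
	
	type warningHandler interface {
		HandleWarningHeader(code int, agent string, text string)
	}
	
	type logWarnings struct{}
	
	func (logWarnings) HandleWarningHeader(code int, agent string, text string) {
		if code == 299 && text != "" { // 299 is the in-band API warning code
			log.Printf("W %s", text)
		}
	}
	
	func main() {
		var h warningHandler = logWarnings{}
		h.HandleWarningHeader(299, "", "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice")
	}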
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-341571 -n addons-341571
helpers_test.go:269: (dbg) Run:  kubectl --context addons-341571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-fxs8m ingress-nginx-admission-patch-mqbh9
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-341571 describe pod ingress-nginx-admission-create-fxs8m ingress-nginx-admission-patch-mqbh9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-341571 describe pod ingress-nginx-admission-create-fxs8m ingress-nginx-admission-patch-mqbh9: exit status 1 (59.22395ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fxs8m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mqbh9" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-341571 describe pod ingress-nginx-admission-create-fxs8m ingress-nginx-admission-patch-mqbh9: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-341571 addons disable ingress-dns --alsologtostderr -v=1: (1.066384367s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-341571 addons disable ingress --alsologtostderr -v=1: (7.669539683s)
--- FAIL: TestAddons/parallel/Ingress (156.83s)
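
The failing step boils down to a curl with a Host header override run inside the node over ssh; status 28 is curl's operation-timeout exit code. The same probe in plain Go (endpoint and host taken from the test above; this is a reproduction sketch, not part of the harness):

	package main
	
	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // Go's equivalent of curl -H 'Host: nginx.example.com'
		client := &http.Client{Timeout: 30 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed:", err) // the test's curl timed out here
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}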

TestFunctional/parallel/DashboardCmd (302.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-383702 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-383702 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-383702 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-383702 --alsologtostderr -v=1] stderr:
I0926 22:39:08.386371  253684 out.go:360] Setting OutFile to fd 1 ...
I0926 22:39:08.387309  253684 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:39:08.387325  253684 out.go:374] Setting ErrFile to fd 2...
I0926 22:39:08.387331  253684 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:39:08.387522  253684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
I0926 22:39:08.387814  253684 mustload.go:65] Loading cluster: functional-383702
I0926 22:39:08.388228  253684 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:39:08.388593  253684 cli_runner.go:164] Run: docker container inspect functional-383702 --format={{.State.Status}}
I0926 22:39:08.408346  253684 host.go:66] Checking if "functional-383702" exists ...
I0926 22:39:08.408626  253684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0926 22:39:08.465449  253684 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-26 22:39:08.455067898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0926 22:39:08.465568  253684 api_server.go:166] Checking apiserver status ...
I0926 22:39:08.465613  253684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0926 22:39:08.465649  253684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383702
I0926 22:39:08.484145  253684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/functional-383702/id_rsa Username:docker}
I0926 22:39:08.588103  253684 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5410/cgroup
W0926 22:39:08.599480  253684 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5410/cgroup: Process exited with status 1
stdout:

stderr:
I0926 22:39:08.599550  253684 ssh_runner.go:195] Run: ls
I0926 22:39:08.603612  253684 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0926 22:39:08.607874  253684 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0926 22:39:08.607943  253684 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0926 22:39:08.608118  253684 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:39:08.608143  253684 addons.go:69] Setting dashboard=true in profile "functional-383702"
I0926 22:39:08.608155  253684 addons.go:238] Setting addon dashboard=true in "functional-383702"
I0926 22:39:08.608184  253684 host.go:66] Checking if "functional-383702" exists ...
I0926 22:39:08.608488  253684 cli_runner.go:164] Run: docker container inspect functional-383702 --format={{.State.Status}}
I0926 22:39:08.627587  253684 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0926 22:39:08.628850  253684 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0926 22:39:08.630057  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0926 22:39:08.630077  253684 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0926 22:39:08.630146  253684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383702
I0926 22:39:08.648272  253684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/functional-383702/id_rsa Username:docker}
I0926 22:39:08.757753  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0926 22:39:08.757787  253684 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0926 22:39:08.777940  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0926 22:39:08.777989  253684 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0926 22:39:08.799727  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0926 22:39:08.799754  253684 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0926 22:39:08.819160  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0926 22:39:08.819184  253684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0926 22:39:08.838886  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0926 22:39:08.838919  253684 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0926 22:39:08.859042  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0926 22:39:08.859073  253684 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0926 22:39:08.878386  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0926 22:39:08.878414  253684 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0926 22:39:08.896738  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0926 22:39:08.896758  253684 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0926 22:39:08.914653  253684 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0926 22:39:08.914674  253684 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0926 22:39:08.933388  253684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0926 22:39:09.407607  253684 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-383702 addons enable metrics-server

I0926 22:39:09.408642  253684 addons.go:201] Writing out "functional-383702" config to set dashboard=true...
W0926 22:39:09.408859  253684 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0926 22:39:09.409588  253684 kapi.go:59] client config for functional-383702: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt", KeyFile:"/home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.key", CAFile:"/home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0926 22:39:09.410051  253684 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0926 22:39:09.410067  253684 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0926 22:39:09.410071  253684 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0926 22:39:09.410076  253684 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0926 22:39:09.410080  253684 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0926 22:39:09.417716  253684 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  77b6ae5b-a128-4d53-a721-bfbd2a5c3b0e 806 0 2025-09-26 22:39:09 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-26 22:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.4.188,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.4.188],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0926 22:39:09.417869  253684 out.go:285] * Launching proxy ...
* Launching proxy ...
I0926 22:39:09.417940  253684 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-383702 proxy --port 36195]
I0926 22:39:09.418216  253684 dashboard.go:157] Waiting for kubectl to output host:port ...
I0926 22:39:09.463422  253684 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
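
The lines above launch `kubectl proxy` on a fixed port and then block until its stdout prints the serve address. A minimal, self-contained sketch of that launch-and-parse step, assuming the stable "Starting to serve on host:port" banner; startProxy is our name, not minikube's:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// startProxy launches `kubectl proxy` and scans its stdout for the
// "Starting to serve on host:port" banner before returning the address.
func startProxy(kubeContext string, port int) (string, *exec.Cmd, error) {
	cmd := exec.Command("kubectl", "--context", kubeContext, "proxy", "--port", fmt.Sprint(port))
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", nil, err
	}
	if err := cmd.Start(); err != nil {
		return "", nil, err
	}
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		if rest, ok := strings.CutPrefix(scanner.Text(), "Starting to serve on "); ok {
			return rest, cmd, nil // e.g. "127.0.0.1:36195"
		}
	}
	return "", cmd, fmt.Errorf("proxy exited before serving: %v", scanner.Err())
}

func main() {
	addr, cmd, err := startProxy("functional-383702", 36195)
	fmt.Println(addr, err)
	if cmd != nil && cmd.Process != nil {
		cmd.Process.Kill() // tear the proxy down again
	}
}
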
W0926 22:39:09.463479  253684 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0926 22:39:09.473226  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c0093ed7-165f-4461-b14c-464c873a717a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc00151ee80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000294640 TLS:<nil>}
I0926 22:39:09.473334  253684 retry.go:31] will retry after 68.247µs: Temporary Error: unexpected response code: 503
I0926 22:39:09.476716  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7429043d-f6b3-4ccc-9d92-cc067dccfd2b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc0015d2100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b48c0 TLS:<nil>}
I0926 22:39:09.476761  253684 retry.go:31] will retry after 92.933µs: Temporary Error: unexpected response code: 503
I0926 22:39:09.480978  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[abba25b3-edec-4fd5-af32-89a2ea23f44d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc00151ef40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002948c0 TLS:<nil>}
I0926 22:39:09.481032  253684 retry.go:31] will retry after 304.898µs: Temporary Error: unexpected response code: 503
I0926 22:39:09.484418  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b20361ab-924a-4474-85c4-1468a3fba328] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc0015d2200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b4a00 TLS:<nil>}
I0926 22:39:09.484467  253684 retry.go:31] will retry after 397.28µs: Temporary Error: unexpected response code: 503
I0926 22:39:09.487653  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0368e634-e411-4062-95be-247033004b00] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc00151f040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000294a00 TLS:<nil>}
I0926 22:39:09.487735  253684 retry.go:31] will retry after 581.292µs: Temporary Error: unexpected response code: 503
I0926 22:39:09.490910  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3928d2c0-a9e1-41d8-ba2b-731f98ece8da] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc001326c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b4b40 TLS:<nil>}
I0926 22:39:09.490965  253684 retry.go:31] will retry after 751.176µs: Temporary Error: unexpected response code: 503
I0926 22:39:09.494222  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9677af93-5bb7-4c6e-afa3-7042b995f44e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc00151f140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6000 TLS:<nil>}
I0926 22:39:09.494257  253684 retry.go:31] will retry after 1.504751ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.498312  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b59f6555-4fe5-4b81-a051-03e17c18ca68] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc0015d2340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b4c80 TLS:<nil>}
I0926 22:39:09.498360  253684 retry.go:31] will retry after 988.094µs: Temporary Error: unexpected response code: 503
I0926 22:39:09.501484  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8f54b37-e236-4804-ac23-cf2bb1393ec4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc001326d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000294c80 TLS:<nil>}
I0926 22:39:09.501519  253684 retry.go:31] will retry after 1.563068ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.505878  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00347132-12e7-44e6-9355-639cfb44490b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc00151f240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6140 TLS:<nil>}
I0926 22:39:09.505912  253684 retry.go:31] will retry after 5.472091ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.514172  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d6cc8c37-a51d-44af-9fb3-abe99fbd71a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc0015d2440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b4dc0 TLS:<nil>}
I0926 22:39:09.514224  253684 retry.go:31] will retry after 8.496313ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.525799  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6215e8f6-483f-4e9e-8888-39b3f489605d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc001326e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000294dc0 TLS:<nil>}
I0926 22:39:09.525850  253684 retry.go:31] will retry after 7.831549ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.536568  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[78b556f7-0734-4833-ad42-b635f1560f1e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc0015d2540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6280 TLS:<nil>}
I0926 22:39:09.536630  253684 retry.go:31] will retry after 18.85065ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.559096  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[029826d6-b8d3-4933-8b2f-291c20bf9a53] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc00151f340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000295040 TLS:<nil>}
I0926 22:39:09.559173  253684 retry.go:31] will retry after 26.642278ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.589604  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6551553b-4a79-4dc3-928e-846f266a8693] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc001326f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b4f00 TLS:<nil>}
I0926 22:39:09.589691  253684 retry.go:31] will retry after 19.210345ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.612772  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a7da084-0cd5-4b10-89d0-d966654ec83f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc00151f480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e63c0 TLS:<nil>}
I0926 22:39:09.612847  253684 retry.go:31] will retry after 52.917573ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.669036  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c8b50eea-903e-453f-a421-80ae44d8ffea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc00151f540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b5040 TLS:<nil>}
I0926 22:39:09.669123  253684 retry.go:31] will retry after 69.197123ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.742692  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28515a42-af8c-4ba9-a6f0-f7b3faf33ea7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc0015d2640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b5180 TLS:<nil>}
I0926 22:39:09.742779  253684 retry.go:31] will retry after 104.226823ms: Temporary Error: unexpected response code: 503
I0926 22:39:09.851477  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c4524bc-1b3e-425b-9b83-4d0260a3204b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:09 GMT]] Body:0xc001327080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000295180 TLS:<nil>}
I0926 22:39:09.851541  253684 retry.go:31] will retry after 152.576652ms: Temporary Error: unexpected response code: 503
I0926 22:39:10.008168  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b412c247-6d67-4740-ae0c-26e6bc9c9160] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:10 GMT]] Body:0xc00151f640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6500 TLS:<nil>}
I0926 22:39:10.008244  253684 retry.go:31] will retry after 244.944069ms: Temporary Error: unexpected response code: 503
I0926 22:39:10.256756  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[11fefcac-9a90-4b48-b724-36f9b14bb9ec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:10 GMT]] Body:0xc0015d2740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b52c0 TLS:<nil>}
I0926 22:39:10.256844  253684 retry.go:31] will retry after 257.651622ms: Temporary Error: unexpected response code: 503
I0926 22:39:10.518532  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bb772a82-58ef-4945-928b-dbaf6d44730a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:10 GMT]] Body:0xc0013271c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002952c0 TLS:<nil>}
I0926 22:39:10.518602  253684 retry.go:31] will retry after 536.995072ms: Temporary Error: unexpected response code: 503
I0926 22:39:11.059238  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[23943dad-e236-408f-bb53-729e4239b184] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:11 GMT]] Body:0xc00151f700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6640 TLS:<nil>}
I0926 22:39:11.059318  253684 retry.go:31] will retry after 503.635592ms: Temporary Error: unexpected response code: 503
I0926 22:39:11.567150  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[34e9c040-e76d-4f7b-bacf-d0eacf655611] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:11 GMT]] Body:0xc0013272c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b5400 TLS:<nil>}
I0926 22:39:11.567234  253684 retry.go:31] will retry after 1.164491927s: Temporary Error: unexpected response code: 503
I0926 22:39:12.735556  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1bfffca4-c411-4059-ad31-3001cc73597b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:12 GMT]] Body:0xc00151f800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6780 TLS:<nil>}
I0926 22:39:12.735620  253684 retry.go:31] will retry after 883.558065ms: Temporary Error: unexpected response code: 503
I0926 22:39:13.623041  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dc65bf61-786a-4332-84c0-802e1b31c95f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:13 GMT]] Body:0xc0015d2840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b5540 TLS:<nil>}
I0926 22:39:13.623119  253684 retry.go:31] will retry after 3.631530889s: Temporary Error: unexpected response code: 503
I0926 22:39:17.261559  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[df5eb650-d683-418d-9f74-1060dc9dac60] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:17 GMT]] Body:0xc0015d2900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000295400 TLS:<nil>}
I0926 22:39:17.261623  253684 retry.go:31] will retry after 3.555883814s: Temporary Error: unexpected response code: 503
I0926 22:39:20.821778  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[63441058-f6b6-474c-9b7a-97910bb07eb0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:20 GMT]] Body:0xc00151f940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b5680 TLS:<nil>}
I0926 22:39:20.821860  253684 retry.go:31] will retry after 6.044986743s: Temporary Error: unexpected response code: 503
I0926 22:39:26.873161  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7b25b95f-de7d-45e5-8d7f-3197fabcbcc8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:26 GMT]] Body:0xc0015d2980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b57c0 TLS:<nil>}
I0926 22:39:26.873238  253684 retry.go:31] will retry after 9.040166173s: Temporary Error: unexpected response code: 503
I0926 22:39:35.919055  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d66e71e-bf79-453f-947d-4a3c8221ea9b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:35 GMT]] Body:0xc001327440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000295540 TLS:<nil>}
I0926 22:39:35.919143  253684 retry.go:31] will retry after 15.785170152s: Temporary Error: unexpected response code: 503
I0926 22:39:51.709221  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f52fa582-c2c6-4f41-82d9-89f75af2a4df] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:39:51 GMT]] Body:0xc0015d2a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e68c0 TLS:<nil>}
I0926 22:39:51.709305  253684 retry.go:31] will retry after 28.543616976s: Temporary Error: unexpected response code: 503
I0926 22:40:20.256444  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7df8da1-d3aa-425f-8941-108f25c08a4b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:40:20 GMT]] Body:0xc00151fa40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000295e00 TLS:<nil>}
I0926 22:40:20.256522  253684 retry.go:31] will retry after 26.031873582s: Temporary Error: unexpected response code: 503
I0926 22:40:46.294418  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1694e8f3-ef2a-42c2-9a7b-02f90d402d64] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:40:46 GMT]] Body:0xc001327540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005b5b80 TLS:<nil>}
I0926 22:40:46.294488  253684 retry.go:31] will retry after 56.686588984s: Temporary Error: unexpected response code: 503
I0926 22:41:42.984519  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f3afcac2-9765-414e-bbcd-0153ee321474] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:41:42 GMT]] Body:0xc00151e040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6a00 TLS:<nil>}
I0926 22:41:42.984606  253684 retry.go:31] will retry after 1m8.822687486s: Temporary Error: unexpected response code: 503
I0926 22:42:51.813143  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f81616a-57ff-4a0e-a496-78185021742f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:42:51 GMT]] Body:0xc00151e0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6b40 TLS:<nil>}
I0926 22:42:51.813226  253684 retry.go:31] will retry after 1m13.466593381s: Temporary Error: unexpected response code: 503
I0926 22:44:05.283911  253684 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe1aa4d7-0802-423e-bb58-978f300e5a89] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 26 Sep 2025 22:44:05 GMT]] Body:0xc001326100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e6c80 TLS:<nil>}
I0926 22:44:05.284008  253684 retry.go:31] will retry after 44.199240955s: Temporary Error: unexpected response code: 503
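
Every 503 above feeds retry.go's jittered exponential backoff: the intervals grow from microseconds to more than a minute until the test deadline expires. A simplified sketch of such a polling loop, under the assumption of doubling delays with random jitter; minikube's actual implementation differs in detail:

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// pollUntilOK polls url until it returns 200 OK or timeout elapses,
// doubling a jittered delay after every failure, capped at one minute.
func pollUntilOK(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Microsecond
	const maxDelay = time.Minute
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			err = fmt.Errorf("unexpected response code: %d", resp.StatusCode)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, last error: %v", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: Temporary Error: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	err := pollUntilOK("http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/", 5*time.Minute)
	fmt.Println(err)
}
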
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-383702
helpers_test.go:243: (dbg) docker inspect functional-383702:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb",
	        "Created": "2025-09-26T22:36:40.056518629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:36:40.09457273Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/hostname",
	        "HostsPath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/hosts",
	        "LogPath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb-json.log",
	        "Name": "/functional-383702",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-383702:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-383702",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb",
	                "LowerDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21-init/diff:/var/lib/docker/overlay2/539bc53ffef0f27e9ac4c376a14359e91e4b4c4b56d5675ca6caeaaab94e33fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-383702",
	                "Source": "/var/lib/docker/volumes/functional-383702/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-383702",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-383702",
	                "name.minikube.sigs.k8s.io": "functional-383702",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c14bb709d6a4da527cce048680ec7ef48cd8e5ac85d535e44da14f1b9772750c",
	            "SandboxKey": "/var/run/docker/netns/c14bb709d6a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-383702": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:79:04:17:6b:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a83464f985ef865e1ec8346088ff1329167c2e12fc9c4cd85e0e75b2304af91",
	                    "EndpointID": "8043ba5696f833472ccf69672aa395e1b32ba8046195d7cd544a87833268183f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-383702",
	                        "18074625eace"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
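
The Ports map in this inspect output is how the host reaches the node (for example, 8441/tcp is bound to 127.0.0.1:32781). One way to extract a single mapping is to shell out to docker inspect with a Go template; the container name and port below come from the output above, and hostPort is an illustrative helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks `docker inspect` for the host port bound to a given
// container port, using a Go template over .NetworkSettings.Ports.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "inspect", "--format", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("functional-383702", "8441/tcp")
	fmt.Println(p, err) // expected "32781" for the container inspected above
}
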
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-383702 -n functional-383702
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-383702 logs -n 25: (1.436492064s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-383702 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ ssh       │ functional-383702 ssh findmnt -T /mount1                                                                                                                        │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ mount     │ -p functional-383702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2939563306/001:/mount2 --alsologtostderr -v=1                                              │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ mount     │ -p functional-383702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2939563306/001:/mount3 --alsologtostderr -v=1                                              │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ mount     │ -p functional-383702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2939563306/001:/mount1 --alsologtostderr -v=1                                              │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ start     │ -p functional-383702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ ssh       │ functional-383702 ssh findmnt -T /mount1                                                                                                                        │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ start     │ -p functional-383702 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ start     │ -p functional-383702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ ssh       │ functional-383702 ssh findmnt -T /mount2                                                                                                                        │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ dashboard │ --url --port 36195 -p functional-383702 --alsologtostderr -v=1                                                                                                  │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ ssh       │ functional-383702 ssh findmnt -T /mount3                                                                                                                        │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ mount     │ -p functional-383702 --kill=true                                                                                                                                │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ image     │ functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image save kicbase/echo-server:functional-383702 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image rm kicbase/echo-server:functional-383702 --alsologtostderr                                                                              │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image     │ functional-383702 image save --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:39:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:39:08.213339  253530 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:39:08.213599  253530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:08.213609  253530 out.go:374] Setting ErrFile to fd 2...
	I0926 22:39:08.213614  253530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:08.213934  253530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 22:39:08.214449  253530 out.go:368] Setting JSON to false
	I0926 22:39:08.215621  253530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8497,"bootTime":1758917851,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:39:08.215714  253530 start.go:140] virtualization: kvm guest
	I0926 22:39:08.217535  253530 out.go:179] * [functional-383702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:39:08.219219  253530 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:39:08.219220  253530 notify.go:220] Checking for updates...
	I0926 22:39:08.220685  253530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:39:08.222326  253530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 22:39:08.223663  253530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 22:39:08.224967  253530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:39:08.226240  253530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:39:08.227804  253530 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:39:08.228421  253530 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:39:08.256215  253530 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:39:08.256361  253530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:39:08.320575  253530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-26 22:39:08.309855559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:39:08.320680  253530 docker.go:318] overlay module found
	I0926 22:39:08.323360  253530 out.go:179] * Using the docker driver based on the existing profile
	I0926 22:39:08.324648  253530 start.go:304] selected driver: docker
	I0926 22:39:08.324684  253530 start.go:924] validating driver "docker" against &{Name:functional-383702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-383702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:39:08.324792  253530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:39:08.326812  253530 out.go:203] 
	W0926 22:39:08.329163  253530 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I0926 22:39:08.330497  253530 out.go:203] 
	
	
	==> CRI-O <==
	Sep 26 22:42:24 functional-383702 crio[4238]: time="2025-09-26 22:42:24.349177768Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 26 22:42:36 functional-383702 crio[4238]: time="2025-09-26 22:42:36.391780717Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8aa29b31-9644-4b14-9396-51749078c8fc name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:42:36 functional-383702 crio[4238]: time="2025-09-26 22:42:36.392145496Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=8aa29b31-9644-4b14-9396-51749078c8fc name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:42:48 functional-383702 crio[4238]: time="2025-09-26 22:42:48.392642856Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2266637e-b11d-4038-acdb-9478866e86bf name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:42:48 functional-383702 crio[4238]: time="2025-09-26 22:42:48.393027072Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=2266637e-b11d-4038-acdb-9478866e86bf name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:42:54 functional-383702 crio[4238]: time="2025-09-26 22:42:54.431867634Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0b485469-a1ee-4339-b301-4b739e119eb7 name=/runtime.v1.ImageService/PullImage
	Sep 26 22:42:54 functional-383702 crio[4238]: time="2025-09-26 22:42:54.432631827Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=eae0c8d3-970f-438c-8e46-dd435b0e9462 name=/runtime.v1.ImageService/PullImage
	Sep 26 22:42:54 functional-383702 crio[4238]: time="2025-09-26 22:42:54.433371903Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=fdfda8b9-1053-4fce-b17c-d3c2d518946c name=/runtime.v1.ImageService/PullImage
	Sep 26 22:42:54 functional-383702 crio[4238]: time="2025-09-26 22:42:54.437503744Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 26 22:43:05 functional-383702 crio[4238]: time="2025-09-26 22:43:05.392384374Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=56f10441-879f-4f13-82e0-6111f7bc701e name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:43:05 functional-383702 crio[4238]: time="2025-09-26 22:43:05.392728993Z" level=info msg="Image docker.io/mysql:5.7 not found" id=56f10441-879f-4f13-82e0-6111f7bc701e name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:43:16 functional-383702 crio[4238]: time="2025-09-26 22:43:16.392462356Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=f5ecf0ea-ddb1-44c7-8e86-ffa2f6730158 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:43:16 functional-383702 crio[4238]: time="2025-09-26 22:43:16.392757450Z" level=info msg="Image docker.io/mysql:5.7 not found" id=f5ecf0ea-ddb1-44c7-8e86-ffa2f6730158 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:43:24 functional-383702 crio[4238]: time="2025-09-26 22:43:24.533149728Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=0d00dd51-6dbb-48bd-bed5-d9ec3f35e6a9 name=/runtime.v1.ImageService/PullImage
	Sep 26 22:43:24 functional-383702 crio[4238]: time="2025-09-26 22:43:24.537684641Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 26 22:43:38 functional-383702 crio[4238]: time="2025-09-26 22:43:38.392637711Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=a5b9e8bf-19f6-4f3b-ad47-c89b737a23e0 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:43:38 functional-383702 crio[4238]: time="2025-09-26 22:43:38.393081657Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=a5b9e8bf-19f6-4f3b-ad47-c89b737a23e0 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:43:51 functional-383702 crio[4238]: time="2025-09-26 22:43:51.395420449Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=c8701975-b284-4d21-a5f3-21d2a4323295 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:43:51 functional-383702 crio[4238]: time="2025-09-26 22:43:51.395795960Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=c8701975-b284-4d21-a5f3-21d2a4323295 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:43:54 functional-383702 crio[4238]: time="2025-09-26 22:43:54.634106959Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=4a681d6c-017f-41b2-8bfa-e43189e89519 name=/runtime.v1.ImageService/PullImage
	Sep 26 22:43:54 functional-383702 crio[4238]: time="2025-09-26 22:43:54.637795207Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 26 22:44:05 functional-383702 crio[4238]: time="2025-09-26 22:44:05.392579132Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=90b9bb97-2d34-4a48-8c60-50ba6937cd89 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:44:05 functional-383702 crio[4238]: time="2025-09-26 22:44:05.392589127Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=9c4403f9-909a-4810-ab0a-2ff949a6cf49 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:44:05 functional-383702 crio[4238]: time="2025-09-26 22:44:05.392866475Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=90b9bb97-2d34-4a48-8c60-50ba6937cd89 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:44:05 functional-383702 crio[4238]: time="2025-09-26 22:44:05.392999529Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=9c4403f9-909a-4810-ab0a-2ff949a6cf49 name=/runtime.v1.ImageService/ImageStatus
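
The repeated "Checking image status" / "Image ... not found" / "Pulling image" lines above are the kubelet driving CRI-O over the CRI gRPC API (/runtime.v1.ImageService): each pull back-off retry first asks ImageStatus and, seeing no local copy, issues PullImage again. As a reader aid (not part of the test suite), a minimal Go sketch of the same ImageStatus call follows; the CRI-O socket path and the mysql:5.7 reference are assumptions taken from this node's logs:

// cri_imagestatus.go: sketch of the ImageStatus call recorded above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's default runtime endpoint; adjust if the node differs.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/mysql:5.7"},
	})
	if err != nil {
		log.Fatalf("ImageStatus: %v", err)
	}
	if resp.Image == nil {
		fmt.Println("image not found") // matches the "Image ... not found" lines
	} else {
		fmt.Println("image present:", resp.Image.Id)
	}
}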
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4873b5fd9c1d2       docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285       5 minutes ago       Running             myfrontend                0                   35232540f4a39       sp-pod
	92da9787b27c8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   986415bda1b4d       busybox-mount
	c7f3fb2ed6c31       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8       5 minutes ago       Running             nginx                     0                   8607d64b8e65b       nginx-svc
	f7052d19e3972       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       2                   33055bf5bc514       storage-provisioner
	0f1d90fff1994       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      5 minutes ago       Running             kube-apiserver            0                   817512b32d33d       kube-apiserver-functional-383702
	9aad9441ea24b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      5 minutes ago       Running             etcd                      1                   5a1d3696d5e69       etcd-functional-383702
	2f8f3416d803c       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      5 minutes ago       Running             kube-controller-manager   2                   6e3b19ba6bd5a       kube-controller-manager-functional-383702
	d002f125363d7       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      6 minutes ago       Exited              kube-controller-manager   1                   6e3b19ba6bd5a       kube-controller-manager-functional-383702
	f2b96981f3cea       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      6 minutes ago       Running             kube-scheduler            1                   e67e6055c8b97       kube-scheduler-functional-383702
	25c8780bc9df0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      6 minutes ago       Running             kindnet-cni               1                   1bd0dce351379       kindnet-h9qvl
	71d7d7d7cb585       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       1                   33055bf5bc514       storage-provisioner
	649d32bc054df       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      6 minutes ago       Running             kube-proxy                1                   63ca8b4fd4bec       kube-proxy-27n4x
	8515d054eecd5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   1                   189b543c8cd5c       coredns-66bc5c9577-sxzwb
	a720d09796fe8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Exited              coredns                   0                   189b543c8cd5c       coredns-66bc5c9577-sxzwb
	531d4b0a6adad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      7 minutes ago       Exited              kindnet-cni               0                   1bd0dce351379       kindnet-h9qvl
	85c3ffe817ca8       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      7 minutes ago       Exited              kube-proxy                0                   63ca8b4fd4bec       kube-proxy-27n4x
	db73bb67b2a2d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      7 minutes ago       Exited              kube-scheduler            0                   e67e6055c8b97       kube-scheduler-functional-383702
	f90cfaf912f69       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      0                   5a1d3696d5e69       etcd-functional-383702
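
The table above is CRI data, the same view that crictl ps -a renders. Under the same socket assumption as the previous sketch, listing it from Go via the RuntimeService side of the API looks roughly like this:

// cri_ps.go: sketch listing containers as in the status table above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		// Truncated ID, name, and state, mirroring the table's first columns.
		fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}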
	
	
	==> coredns [8515d054eecd5a444f86dd4f43d164940d668d155f81dc6c68bb9d234a92876d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46095 - 27369 "HINFO IN 2769917989759994095.5307631164563384989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021713353s
	
	
	==> coredns [a720d09796fe8c6300b07136f5a321c333362dd5a3c25385c7ee30aaf1d7ed90] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55613 - 20235 "HINFO IN 7109503854822832070.3156555241200074520. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013759315s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
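
Both CoreDNS instances load the same configuration (identical SHA512) and answer on :53; the long random-name HINFO query appears to be the loop plugin's startup self-check, and the SIGTERM/lameduck lines mark the clean shutdown of the pre-restart instance. A minimal Go sketch for querying this resolver directly follows; the cluster DNS address 10.96.0.10 (minikube's usual kube-dns ClusterIP) and the service name are assumptions, and the address must be reachable from wherever the sketch runs, e.g. inside the node:

// dns_probe.go: sketch querying the CoreDNS instance logged above.
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			// Bypass the host's configured resolver and talk to CoreDNS directly.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	ips, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local")
	if err != nil {
		log.Fatalf("lookup: %v", err)
	}
	fmt.Println("resolved:", ips)
}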
	
	
	==> describe nodes <==
	Name:               functional-383702
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-383702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-383702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_36_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-383702
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:42:06 +0000   Fri, 26 Sep 2025 22:36:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:42:06 +0000   Fri, 26 Sep 2025 22:36:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:42:06 +0000   Fri, 26 Sep 2025 22:36:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:42:06 +0000   Fri, 26 Sep 2025 22:37:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-383702
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0a02ced94d640b981affd2bc93c81c4
	  System UUID:                f593ccff-392d-4c4d-a0b7-5fd374fb4177
	  Boot ID:                    ce94279f-6745-4217-952d-f9fda1755c22
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-np6td                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  default                     hello-node-connect-7d85dfc575-vmzsk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  default                     mysql-5bb876957f-g2lbw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     4m55s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 coredns-66bc5c9577-sxzwb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m9s
	  kube-system                 etcd-functional-383702                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m14s
	  kube-system                 kindnet-h9qvl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m9s
	  kube-system                 kube-apiserver-functional-383702              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-controller-manager-functional-383702     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-proxy-27n4x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-scheduler-functional-383702              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-srn29    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gqgkx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m8s                   kube-proxy       
	  Normal  Starting                 5m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m19s (x8 over 7m19s)  kubelet          Node functional-383702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s (x8 over 7m19s)  kubelet          Node functional-383702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s (x8 over 7m19s)  kubelet          Node functional-383702 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m14s                  kubelet          Node functional-383702 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m14s                  kubelet          Node functional-383702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m14s                  kubelet          Node functional-383702 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           7m10s                  node-controller  Node functional-383702 event: Registered Node functional-383702 in Controller
	  Normal  NodeReady                6m28s                  kubelet          Node functional-383702 status is now: NodeReady
	  Normal  Starting                 5m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m49s (x8 over 5m49s)  kubelet          Node functional-383702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s (x8 over 5m49s)  kubelet          Node functional-383702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s (x8 over 5m49s)  kubelet          Node functional-383702 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m43s                  node-controller  Node functional-383702 event: Registered Node functional-383702 in Controller
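
For reference, the Allocated resources totals in the node description above are the column sums of the pod table, taken against the node's allocatable capacity (8 CPUs, 32863456Ki memory): CPU requests 600m + 100m + 100m + 100m + 250m + 200m + 100m = 1450m, and 1450m / 8000m ≈ 18%; CPU limits 700m + 100m = 800m → 10%; memory requests 512Mi + 70Mi + 100Mi + 50Mi = 732Mi ≈ 2%; memory limits 700Mi + 170Mi + 50Mi = 920Mi ≈ 2%.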
	
	
	==> dmesg <==
	[  +0.088607] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025515] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.894785] kauditd_printk_skb: 47 callbacks suppressed
	[Sep26 22:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.003220] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023850] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +2.048746] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +4.030628] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +8.319153] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[ +16.382271] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[Sep26 22:34] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000033] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	
	
	==> etcd [9aad9441ea24b0821d0b27d2b6f00a7097cfadb6fe6a12eef6ed624fbdd9b988] <==
	{"level":"warn","ts":"2025-09-26T22:38:22.156032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.164331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.170537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.176687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.182665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.189770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.196105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.202297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.209041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.216245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.222734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.228820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.236172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.242527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.249150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.256372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.262550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.268607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.275673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.282122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.288232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.305592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.311924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.320124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.365195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38824","server-name":"","error":"EOF"}
	
	
	==> etcd [f90cfaf912f6987b18aac3393fb4cda3e0e222a40622257ee440fb60cd895054] <==
	{"level":"warn","ts":"2025-09-26T22:36:51.915127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.921316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.928043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.945530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.951934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.958381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:52.006133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54854","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:38:18.020980Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:38:18.021074Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-383702","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:38:18.021184Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:38:18.022756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:38:18.022817Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:38:18.022838Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:38:18.022889Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:38:18.022891Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:38:18.022962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:38:18.023012Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:38:18.023020Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-26T22:38:18.022953Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:38:18.023043Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:38:18.023050Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:38:18.024946Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:38:18.025013Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:38:18.025051Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:38:18.025062Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-383702","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:44:09 up  2:26,  0 users,  load average: 0.21, 3.85, 28.19
	Linux functional-383702 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [25c8780bc9df0394418a978d4dcd37a1fc9e8a43e3e0f927e81a42e3af478801] <==
	I0926 22:42:08.645155       1 main.go:301] handling current node
	I0926 22:42:18.652692       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:18.652741       1 main.go:301] handling current node
	I0926 22:42:28.644409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:28.644474       1 main.go:301] handling current node
	I0926 22:42:38.644941       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:38.644998       1 main.go:301] handling current node
	I0926 22:42:48.644556       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:48.644613       1 main.go:301] handling current node
	I0926 22:42:58.645654       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:42:58.645688       1 main.go:301] handling current node
	I0926 22:43:08.645746       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:43:08.645795       1 main.go:301] handling current node
	I0926 22:43:18.644594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:43:18.644633       1 main.go:301] handling current node
	I0926 22:43:28.644939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:43:28.644981       1 main.go:301] handling current node
	I0926 22:43:38.644773       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:43:38.644811       1 main.go:301] handling current node
	I0926 22:43:48.652866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:43:48.652901       1 main.go:301] handling current node
	I0926 22:43:58.645164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:43:58.645201       1 main.go:301] handling current node
	I0926 22:44:08.644243       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:44:08.644276       1 main.go:301] handling current node
	
	
	==> kindnet [531d4b0a6adad39d0c664b36894d865492c0c437bd84c8b98e737b8bc27b4ff6] <==
	I0926 22:37:01.223127       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0926 22:37:01.223432       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0926 22:37:01.223573       1 main.go:148] setting mtu 1500 for CNI 
	I0926 22:37:01.223592       1 main.go:178] kindnetd IP family: "ipv4"
	I0926 22:37:01.223622       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-26T22:37:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0926 22:37:01.428167       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0926 22:37:01.428250       1 controller.go:381] "Waiting for informer caches to sync"
	I0926 22:37:01.428668       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0926 22:37:01.429146       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0926 22:37:31.429573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0926 22:37:31.429578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0926 22:37:31.429577       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0926 22:37:31.429631       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I0926 22:37:32.829571       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0926 22:37:32.829605       1 metrics.go:72] Registering metrics
	I0926 22:37:32.829662       1 controller.go:711] "Syncing nftables rules"
	I0926 22:37:41.436190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:37:41.436257       1 main.go:301] handling current node
	I0926 22:37:51.435172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:37:51.435218       1 main.go:301] handling current node
	I0926 22:38:01.432631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:38:01.432669       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0f1d90fff19941eda0ac9f0e8c915241a757b6db2dcaf4db40398d4640877683] <==
	I0926 22:38:24.029936       1 controller.go:667] quota admission added evaluator for: endpoints
	I0926 22:38:24.034528       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0926 22:38:24.272974       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0926 22:38:24.373877       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0926 22:38:24.423691       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0926 22:38:24.430182       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0926 22:38:26.154463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0926 22:38:39.139038       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.181.55"}
	I0926 22:38:43.686841       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.81.33"}
	I0926 22:38:45.106292       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.198.196"}
	I0926 22:38:45.591413       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.254.0"}
	E0926 22:38:59.806926       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50086: use of closed network connection
	E0926 22:39:07.782709       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:42046: use of closed network connection
	I0926 22:39:09.264496       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:39:09.389245       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.4.188"}
	I0926 22:39:09.400137       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.0.7"}
	I0926 22:39:14.899740       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.36.26"}
	I0926 22:39:27.279947       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:40.167657       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:41.816993       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:54.178554       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:56.347046       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:11.725080       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:20.281575       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:28.063892       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [2f8f3416d803c18dee54a93529d5dfcf746f63352015dd5c8f9cc13d2fc5c6f1] <==
	I0926 22:38:26.145530       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:38:26.145566       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:38:26.147756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:38:26.147787       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:38:26.148937       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 22:38:26.148966       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:38:26.149033       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:38:26.149047       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:38:26.149057       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:38:26.149115       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:38:26.149158       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0926 22:38:26.149508       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 22:38:26.149516       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0926 22:38:26.152521       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 22:38:26.153763       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:38:26.153764       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:38:26.157054       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0926 22:38:26.171128       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:38:26.186442       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:39:09.323398       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.327773       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.331324       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.334272       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.334831       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.339730       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [d002f125363d7cabaab809253fe8b16078d6f0a4a8a2cefc0f977363ea283a0c] <==
	I0926 22:38:09.246246       1 serving.go:386] Generated self-signed cert in-memory
	I0926 22:38:09.798602       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 22:38:09.798624       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:38:09.799874       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 22:38:09.799876       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 22:38:09.800159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 22:38:09.800188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0926 22:38:19.801762       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [649d32bc054dfc495e951045c142935673ec2afcf84fe1b7ac108730602f4073] <==
	I0926 22:38:08.407573       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0926 22:38:08.408843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:09.749483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:11.860278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:15.529555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0926 22:38:23.607794       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:38:23.607841       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:38:23.607947       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:38:23.627021       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:38:23.627103       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:38:23.632516       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:38:23.632955       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:38:23.632995       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:38:23.634342       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:38:23.634364       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:38:23.634477       1 config.go:200] "Starting service config controller"
	I0926 22:38:23.634490       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:38:23.634487       1 config.go:309] "Starting node config controller"
	I0926 22:38:23.634502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:38:23.634507       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:38:23.634511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:38:23.634512       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:38:23.735461       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:38:23.735539       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:38:23.735488       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [85c3ffe817ca877caa2254a21d5dd25f802610ae24d4f2564968a7fef018106a] <==
	I0926 22:37:01.068434       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:37:01.136995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:37:01.237563       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:37:01.237606       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:37:01.237688       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:37:01.256792       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:37:01.256866       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:37:01.262250       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:37:01.262677       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:37:01.262733       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:37:01.263902       1 config.go:200] "Starting service config controller"
	I0926 22:37:01.263923       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:37:01.263948       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:37:01.263974       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:37:01.263977       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:37:01.263984       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:37:01.264010       1 config.go:309] "Starting node config controller"
	I0926 22:37:01.264041       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:37:01.365066       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:37:01.365069       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:37:01.365120       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:37:01.365162       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
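
The "Waiting for caches to sync" / "Caches are synced" pairs in both kube-proxy logs are client-go's shared-informer startup handshake: Start kicks off the initial List+Watch, and the component blocks until the local cache holds the first List before acting on events. A minimal client-go sketch of the same pattern follows; kubeconfig loading is an assumption here, and kube-proxy wires its informers differently:

// informer_sync.go: sketch of the shared-informer cache-sync handshake.
package main

import (
	"fmt"
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("config: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("client: %v", err)
	}

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // begins the initial List+Watch

	// Block until the first List has populated the local cache,
	// mirroring the "Waiting for caches to sync" phase in the logs.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		log.Fatal("caches did not sync")
	}
	fmt.Println("caches are synced")
}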
	
	
	==> kube-scheduler [db73bb67b2a2d2365e200487e84eccc01f234727653f3c7874c52237af5df7da] <==
	E0926 22:36:52.625973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:36:52.626230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:36:52.626667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:52.626732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:52.626914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:36:52.626924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:36:52.627021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:36:52.627041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:36:52.627104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:36:52.627167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:52.627355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:36:52.627929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:53.434654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:36:53.434658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:36:53.464130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:53.532612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:53.599951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:53.604071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0926 22:36:54.122224       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:38:07.736943       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:38:07.737060       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0926 22:38:07.737233       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:38:07.737328       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:38:07.737340       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:38:07.737367       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f2b96981f3ceaa989bd478a919e9a70994001f7bc68ddea7326c32df7f23c4e5] <==
	E0926 22:38:13.250399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:38:13.396218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:38:13.541353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:38:13.800508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:38:13.921958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:15.534678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:38:16.144669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:38:16.468679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:38:16.491112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:38:16.514809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:38:16.967225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:38:17.016006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:38:17.423400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:38:17.522224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:38:17.801647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:38:17.805313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:38:17.818178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:38:18.341101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:38:18.393856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:38:18.477682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:38:18.626764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:38:18.770550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:18.886743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:38:19.557699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0926 22:38:24.811408       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
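The two kube-scheduler excerpts above show transient failures, not the test's root cause: the RBAC "forbidden" errors in the first instance are the usual startup window before the system:kube-scheduler role bindings become visible to its informers, and the "connection refused" burst in the second instance coincides with the apiserver on 192.168.49.2:8441 restarting. Both stop once "Caches are synced" is logged. A generic spot-check that the apiserver is serving again (a diagnostic suggestion, not a command from this run):

	kubectl --context functional-383702 get --raw '/readyz?verbose'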
	
	
	==> kubelet <==
	Sep 26 22:43:24 functional-383702 kubelet[5308]: E0926 22:43:24.532661    5308 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 26 22:43:24 functional-383702 kubelet[5308]: E0926 22:43:24.532903    5308 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-srn29_kubernetes-dashboard(e65d8d59-9833-4c81-a988-0d16ac0b13b4): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:43:24 functional-383702 kubelet[5308]: E0926 22:43:24.532969    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-srn29" podUID="e65d8d59-9833-4c81-a988-0d16ac0b13b4"
	Sep 26 22:43:30 functional-383702 kubelet[5308]: E0926 22:43:30.449202    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926610448929857  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:237536}  inodes_used:{value:100}}"
	Sep 26 22:43:30 functional-383702 kubelet[5308]: E0926 22:43:30.449246    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926610448929857  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:237536}  inodes_used:{value:100}}"
	Sep 26 22:43:32 functional-383702 kubelet[5308]: E0926 22:43:32.391657    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-vmzsk" podUID="740f15b9-277e-4139-b64b-8d2c055cafd5"
	Sep 26 22:43:34 functional-383702 kubelet[5308]: E0926 22:43:34.391645    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-np6td" podUID="c75bf40e-9784-4212-8c9b-bea5b99acfeb"
	Sep 26 22:43:38 functional-383702 kubelet[5308]: E0926 22:43:38.393480    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-srn29" podUID="e65d8d59-9833-4c81-a988-0d16ac0b13b4"
	Sep 26 22:43:40 functional-383702 kubelet[5308]: E0926 22:43:40.450600    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926620450374031  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:237536}  inodes_used:{value:100}}"
	Sep 26 22:43:40 functional-383702 kubelet[5308]: E0926 22:43:40.450642    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926620450374031  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:237536}  inodes_used:{value:100}}"
	Sep 26 22:43:45 functional-383702 kubelet[5308]: E0926 22:43:45.392299    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-vmzsk" podUID="740f15b9-277e-4139-b64b-8d2c055cafd5"
	Sep 26 22:43:49 functional-383702 kubelet[5308]: E0926 22:43:49.392216    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-np6td" podUID="c75bf40e-9784-4212-8c9b-bea5b99acfeb"
	Sep 26 22:43:50 functional-383702 kubelet[5308]: E0926 22:43:50.451926    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926630451703200  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:237536}  inodes_used:{value:100}}"
	Sep 26 22:43:50 functional-383702 kubelet[5308]: E0926 22:43:50.451967    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926630451703200  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:237536}  inodes_used:{value:100}}"
	Sep 26 22:43:51 functional-383702 kubelet[5308]: E0926 22:43:51.396282    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-srn29" podUID="e65d8d59-9833-4c81-a988-0d16ac0b13b4"
	Sep 26 22:43:54 functional-383702 kubelet[5308]: E0926 22:43:54.633648    5308 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:43:54 functional-383702 kubelet[5308]: E0926 22:43:54.633715    5308 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 26 22:43:54 functional-383702 kubelet[5308]: E0926 22:43:54.633944    5308 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-gqgkx_kubernetes-dashboard(e8be3bc3-46b3-4f2e-9feb-7c9345cb6f97): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:43:54 functional-383702 kubelet[5308]: E0926 22:43:54.634002    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gqgkx" podUID="e8be3bc3-46b3-4f2e-9feb-7c9345cb6f97"
	Sep 26 22:43:58 functional-383702 kubelet[5308]: E0926 22:43:58.392541    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-vmzsk" podUID="740f15b9-277e-4139-b64b-8d2c055cafd5"
	Sep 26 22:44:00 functional-383702 kubelet[5308]: E0926 22:44:00.453375    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926640453143251  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:237536}  inodes_used:{value:100}}"
	Sep 26 22:44:00 functional-383702 kubelet[5308]: E0926 22:44:00.453411    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926640453143251  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:237536}  inodes_used:{value:100}}"
	Sep 26 22:44:02 functional-383702 kubelet[5308]: E0926 22:44:02.392625    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-np6td" podUID="c75bf40e-9784-4212-8c9b-bea5b99acfeb"
	Sep 26 22:44:05 functional-383702 kubelet[5308]: E0926 22:44:05.393248    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gqgkx" podUID="e8be3bc3-46b3-4f2e-9feb-7c9345cb6f97"
	Sep 26 22:44:09 functional-383702 kubelet[5308]: E0926 22:44:09.392252    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-vmzsk" podUID="740f15b9-277e-4139-b64b-8d2c055cafd5"
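The recurring pull failure for the echo-server pods above is a CRI-O short-name resolution error rather than a registry outage: with no unqualified-search registries configured on the node, the bare reference "kicbase/echo-server" can never be expanded to a fully-qualified image name. A minimal sketch of /etc/containers/registries.conf entries that would let it resolve (illustrative values, not taken from this host):

	# allow short names to fall back to Docker Hub
	unqualified-search-registries = ["docker.io"]

	# or pin this short name explicitly
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"

Equivalently, the deployment could name the image fully, e.g. --image docker.io/kicbase/echo-server. The toomanyrequests failures for the dashboard images are a separate problem (Docker Hub's unauthenticated pull limit); see the note after the DashboardCmd failure below.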
	
	
	==> storage-provisioner [71d7d7d7cb58555a95d1e8fe6617067b351970ff70ccde0f92ad7463b973bef0] <==
	I0926 22:38:08.310387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0926 22:38:08.313832       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f7052d19e3972521a34090086debaad98ed1becd14bad4a55e19bd8957f1e02f] <==
	W0926 22:43:44.390567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:46.394230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:46.399544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:48.402630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:48.406599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:50.409792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:50.413622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:52.416766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:52.421592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:54.424712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:54.428551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:56.431695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:56.435720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:58.438750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:43:58.445522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:00.449367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:00.455267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:02.458228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:02.462474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:04.465707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:04.470173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:06.473398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:06.477250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:08.479870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:44:08.486799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
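The steady stream of Endpoints deprecation warnings above is emitted by client-go on every read or write of a v1 Endpoints object; for this storage-provisioner that is most likely its leader-election lock, which predates EndpointSlice. The warnings are harmless noise on v1.33+ clusters. A generic way to list both the legacy objects and their discovery.k8s.io/v1 replacements (not a command from this run):

	kubectl --context functional-383702 get endpoints,endpointslices -A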
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-383702 -n functional-383702
helpers_test.go:269: (dbg) Run:  kubectl --context functional-383702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx
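Note that busybox-mount shows up in this list only because its phase is Succeeded: the mount-munger container exited 0 (see the describe output below), and the field selector status.phase!=Running matches completed pods as well as failed ones. A stricter filter, mirroring the selector the scheduler itself uses in the logs above, would be:

	kubectl --context functional-383702 get po -A --field-selector=status.phase!=Running,status.phase!=Succeeded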
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-383702 describe pod busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-383702 describe pod busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx: exit status 1 (89.624599ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:38:59 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://92da9787b27c88237d1d1d6551b3b2591365045a9e2071cbf62dfd489bb0e804
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:39:02 +0000
	      Finished:     Fri, 26 Sep 2025 22:39:02 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wwhqw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wwhqw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m11s  default-scheduler  Successfully assigned default/busybox-mount to functional-383702
	  Normal  Pulling    5m11s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m8s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.368s (2.368s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m8s   kubelet            Created container: mount-munger
	  Normal  Started    5m8s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-np6td
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:38:43 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6nqhv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6nqhv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m27s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-np6td to functional-383702
	  Normal   Pulling    2m16s (x4 over 5m27s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     76s (x4 over 5m27s)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     76s (x4 over 5m27s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x9 over 5m26s)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     8s (x9 over 5m26s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-vmzsk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:38:45 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgr9t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vgr9t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m25s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vmzsk to functional-383702
	  Normal   Pulling    2m18s (x4 over 5m25s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     76s (x4 over 5m24s)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     76s (x4 over 5m24s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x10 over 5m23s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x10 over 5m23s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-g2lbw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:39:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g55h4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-g55h4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m56s                default-scheduler  Successfully assigned default/mysql-5bb876957f-g2lbw to functional-383702
	  Warning  Failed     76s (x2 over 3m)     kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     76s (x2 over 3m)     kubelet            Error: ErrImagePull
	  Normal   BackOff    65s (x2 over 3m)     kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     65s (x2 over 3m)     kubelet            Error: ImagePullBackOff
	  Normal   Pulling    54s (x3 over 4m55s)  kubelet            Pulling image "docker.io/mysql:5.7"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-srn29" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gqgkx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-383702 describe pod busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.36s)
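Every image that fails with toomanyrequests in this test (docker.io/mysql:5.7, docker.io/kubernetesui/dashboard, docker.io/kubernetesui/metrics-scraper) is hosted on Docker Hub, so the DashboardCmd failure is driven by the Hub's unauthenticated pull rate limit rather than by the cluster itself. One hedged mitigation sketch, with placeholder credentials and the namespace/service-account names purely illustrative:

	# attach authenticated pull credentials to the service account the pods use
	kubectl --context functional-383702 -n kubernetes-dashboard \
	  create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-383702 -n kubernetes-dashboard \
	  patch serviceaccount default -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Pre-loading the images with minikube image load, or mirroring them to a registry the CI host can pull from anonymously, would sidestep the limit entirely.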

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-383702 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-383702 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-vmzsk" [740f15b9-277e-4139-b64b-8d2c055cafd5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-383702 -n functional-383702
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-26 22:48:45.907659163 +0000 UTC m=+1163.817595434
functional_test.go:1645: (dbg) Run:  kubectl --context functional-383702 describe po hello-node-connect-7d85dfc575-vmzsk -n default
functional_test.go:1645: (dbg) kubectl --context functional-383702 describe po hello-node-connect-7d85dfc575-vmzsk -n default:
Name:             hello-node-connect-7d85dfc575-vmzsk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-383702/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:38:45 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgr9t (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vgr9t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vmzsk to functional-383702
Normal   Pulling    4m23s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     3m21s (x5 over 9m59s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     3m21s (x5 over 9m59s)  kubelet            Error: ErrImagePull
Warning  Failed     2m3s (x16 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    55s (x21 over 9m58s)   kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-383702 logs hello-node-connect-7d85dfc575-vmzsk -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-383702 logs hello-node-connect-7d85dfc575-vmzsk -n default: exit status 1 (72.781223ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vmzsk" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-383702 logs hello-node-connect-7d85dfc575-vmzsk -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-383702 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-vmzsk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-383702/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:38:45 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgr9t (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vgr9t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vmzsk to functional-383702
Normal   Pulling    4m24s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     3m22s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     3m22s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     2m4s (x16 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    56s (x21 over 9m59s)   kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-383702 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-383702 logs -l app=hello-node-connect: exit status 1 (64.535813ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vmzsk" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-383702 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-383702 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.254.0
IPs:                      10.101.254.0
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32164/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
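The empty Endpoints: field in the service description above is the expected consequence of the pod never becoming Ready: a NodePort service only gains backends once a matching pod passes readiness, so connections to port 32164 cannot succeed while the image pull is failing. A generic confirmation (not a command from this run):

	kubectl --context functional-383702 get endpointslices -l kubernetes.io/service-name=hello-node-connect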
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-383702
helpers_test.go:243: (dbg) docker inspect functional-383702:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb",
	        "Created": "2025-09-26T22:36:40.056518629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:36:40.09457273Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/hostname",
	        "HostsPath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/hosts",
	        "LogPath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb-json.log",
	        "Name": "/functional-383702",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-383702:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-383702",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb",
	                "LowerDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21-init/diff:/var/lib/docker/overlay2/539bc53ffef0f27e9ac4c376a14359e91e4b4c4b56d5675ca6caeaaab94e33fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-383702",
	                "Source": "/var/lib/docker/volumes/functional-383702/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-383702",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-383702",
	                "name.minikube.sigs.k8s.io": "functional-383702",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c14bb709d6a4da527cce048680ec7ef48cd8e5ac85d535e44da14f1b9772750c",
	            "SandboxKey": "/var/run/docker/netns/c14bb709d6a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-383702": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:79:04:17:6b:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a83464f985ef865e1ec8346088ff1329167c2e12fc9c4cd85e0e75b2304af91",
	                    "EndpointID": "8043ba5696f833472ccf69672aa395e1b32ba8046195d7cd544a87833268183f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-383702",
	                        "18074625eace"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
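
Triage note: the "Ports" map in the inspect dump above is where the kic container publishes its endpoints on 127.0.0.1 (22/tcp for SSH, 8441/tcp for the apiserver; in this run 32778 and 32781 respectively). Below is a minimal Go sketch for pulling one of those mappings back out of docker inspect output when checking connectivity failures like this one by hand; the container name and port are copied from the dump above, and the helper itself is illustrative only, not part of the test suite. It assumes the docker CLI is on PATH.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// "docker inspect" prints a JSON array with one object per container.
		out, err := exec.Command("docker", "inspect", "functional-383702").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Decode only the NetworkSettings.Ports map; everything else is ignored.
		var containers []struct {
			NetworkSettings struct {
				Ports map[string][]struct {
					HostIp   string
					HostPort string
				}
			}
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		if len(containers) == 0 {
			log.Fatal("no such container")
		}
		// 8441/tcp is the apiserver port in the dump above; for this run the
		// expected output is 127.0.0.1:32781.
		for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}

The same mapping can be read directly with the docker CLI: docker port functional-383702 8441/tcp.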
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-383702 -n functional-383702
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-383702 logs -n 25: (1.480017839s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-383702 --kill=true                                                                                                                                │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │                     │
	│ image          │ functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image save kicbase/echo-server:functional-383702 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image rm kicbase/echo-server:functional-383702 --alsologtostderr                                                                              │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image save --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ update-context │ functional-383702 update-context --alsologtostderr -v=2                                                                                                         │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ update-context │ functional-383702 update-context --alsologtostderr -v=2                                                                                                         │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ update-context │ functional-383702 update-context --alsologtostderr -v=2                                                                                                         │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls --format short --alsologtostderr                                                                                                     │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls --format yaml --alsologtostderr                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ ssh            │ functional-383702 ssh pgrep buildkitd                                                                                                                           │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │                     │
	│ image          │ functional-383702 image build -t localhost/my-image:functional-383702 testdata/build --alsologtostderr                                                          │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls --format json --alsologtostderr                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls --format table --alsologtostderr                                                                                                     │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ service        │ functional-383702 service list                                                                                                                                  │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:48 UTC │ 26 Sep 25 22:48 UTC │
	│ service        │ functional-383702 service list -o json                                                                                                                          │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:48 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:39:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:39:08.213339  253530 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:39:08.213599  253530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:08.213609  253530 out.go:374] Setting ErrFile to fd 2...
	I0926 22:39:08.213614  253530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:08.213934  253530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 22:39:08.214449  253530 out.go:368] Setting JSON to false
	I0926 22:39:08.215621  253530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8497,"bootTime":1758917851,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:39:08.215714  253530 start.go:140] virtualization: kvm guest
	I0926 22:39:08.217535  253530 out.go:179] * [functional-383702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:39:08.219219  253530 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:39:08.219220  253530 notify.go:220] Checking for updates...
	I0926 22:39:08.220685  253530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:39:08.222326  253530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 22:39:08.223663  253530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 22:39:08.224967  253530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:39:08.226240  253530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:39:08.227804  253530 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:39:08.228421  253530 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:39:08.256215  253530 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:39:08.256361  253530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:39:08.320575  253530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-26 22:39:08.309855559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:39:08.320680  253530 docker.go:318] overlay module found
	I0926 22:39:08.323360  253530 out.go:179] * Using the docker driver based on existing profile
	I0926 22:39:08.324648  253530 start.go:304] selected driver: docker
	I0926 22:39:08.324684  253530 start.go:924] validating driver "docker" against &{Name:functional-383702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-383702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:39:08.324792  253530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:39:08.326812  253530 out.go:203] 
	W0926 22:39:08.329163  253530 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 22:39:08.330497  253530 out.go:203] 
	
	
	==> CRI-O <==
	Sep 26 22:47:32 functional-383702 crio[4238]: time="2025-09-26 22:47:32.392566822Z" level=info msg="Image docker.io/mysql:5.7 not found" id=50e31ccc-04b7-469e-8c4b-e41d11b3195f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:47:42 functional-383702 crio[4238]: time="2025-09-26 22:47:42.392507175Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=a34ef1a6-5acd-4088-9a77-71ac3334df1f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:47:42 functional-383702 crio[4238]: time="2025-09-26 22:47:42.392901257Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=a34ef1a6-5acd-4088-9a77-71ac3334df1f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:47:45 functional-383702 crio[4238]: time="2025-09-26 22:47:45.392437866Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=14167bee-3235-48a4-8fbf-47a46f81085b name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:47:45 functional-383702 crio[4238]: time="2025-09-26 22:47:45.392647103Z" level=info msg="Image docker.io/mysql:5.7 not found" id=14167bee-3235-48a4-8fbf-47a46f81085b name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:47:57 functional-383702 crio[4238]: time="2025-09-26 22:47:57.392649807Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=d4072b75-05c4-4149-b5d0-c2120c756691 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:47:57 functional-383702 crio[4238]: time="2025-09-26 22:47:57.392909210Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=d4072b75-05c4-4149-b5d0-c2120c756691 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:47:58 functional-383702 crio[4238]: time="2025-09-26 22:47:58.623487517Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=ff0fdcc7-4aa2-4c34-902d-e180fb0f3fb0 name=/runtime.v1.ImageService/PullImage
	Sep 26 22:47:58 functional-383702 crio[4238]: time="2025-09-26 22:47:58.637533675Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 26 22:48:08 functional-383702 crio[4238]: time="2025-09-26 22:48:08.392719164Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=a306375b-9149-42ec-be17-769b9cf1e2b3 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:08 functional-383702 crio[4238]: time="2025-09-26 22:48:08.393139232Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=a306375b-9149-42ec-be17-769b9cf1e2b3 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:12 functional-383702 crio[4238]: time="2025-09-26 22:48:12.392162742Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ed128b79-4100-4b56-8a71-80bd675e2d5a name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:12 functional-383702 crio[4238]: time="2025-09-26 22:48:12.392517326Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ed128b79-4100-4b56-8a71-80bd675e2d5a name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:23 functional-383702 crio[4238]: time="2025-09-26 22:48:23.392323521Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=bfcd556e-ab06-4311-80cc-c6971bab09a9 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:23 functional-383702 crio[4238]: time="2025-09-26 22:48:23.392575208Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=bfcd556e-ab06-4311-80cc-c6971bab09a9 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:26 functional-383702 crio[4238]: time="2025-09-26 22:48:26.392527261Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=900f526f-88dc-48f4-86f6-194a5b8e69b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:26 functional-383702 crio[4238]: time="2025-09-26 22:48:26.392816792Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=900f526f-88dc-48f4-86f6-194a5b8e69b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:28 functional-383702 crio[4238]: time="2025-09-26 22:48:28.720146395Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fa1e60dd-3384-4e56-bf3f-fe825cc20616 name=/runtime.v1.ImageService/PullImage
	Sep 26 22:48:28 functional-383702 crio[4238]: time="2025-09-26 22:48:28.720961549Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4d546d2b-f555-46e4-afb5-02ca8bae7d6d name=/runtime.v1.ImageService/PullImage
	Sep 26 22:48:37 functional-383702 crio[4238]: time="2025-09-26 22:48:37.392892731Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3d81c3fe-37b5-4ac4-b358-8fa08078d028 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:37 functional-383702 crio[4238]: time="2025-09-26 22:48:37.392930873Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=9d9c53b3-92a3-4cb9-9ffb-929bb360e12f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:37 functional-383702 crio[4238]: time="2025-09-26 22:48:37.393270812Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3d81c3fe-37b5-4ac4-b358-8fa08078d028 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:37 functional-383702 crio[4238]: time="2025-09-26 22:48:37.393361023Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=9d9c53b3-92a3-4cb9-9ffb-929bb360e12f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:44 functional-383702 crio[4238]: time="2025-09-26 22:48:44.392129525Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=15bea1c6-507a-4bf7-a974-f32024d04bc7 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:44 functional-383702 crio[4238]: time="2025-09-26 22:48:44.392395325Z" level=info msg="Image docker.io/mysql:5.7 not found" id=15bea1c6-507a-4bf7-a974-f32024d04bc7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4873b5fd9c1d2       docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285       9 minutes ago       Running             myfrontend                0                   35232540f4a39       sp-pod
	92da9787b27c8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   986415bda1b4d       busybox-mount
	c7f3fb2ed6c31       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8       10 minutes ago      Running             nginx                     0                   8607d64b8e65b       nginx-svc
	f7052d19e3972       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       2                   33055bf5bc514       storage-provisioner
	0f1d90fff1994       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   817512b32d33d       kube-apiserver-functional-383702
	9aad9441ea24b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      1                   5a1d3696d5e69       etcd-functional-383702
	2f8f3416d803c       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   2                   6e3b19ba6bd5a       kube-controller-manager-functional-383702
	d002f125363d7       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Exited              kube-controller-manager   1                   6e3b19ba6bd5a       kube-controller-manager-functional-383702
	f2b96981f3cea       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Running             kube-scheduler            1                   e67e6055c8b97       kube-scheduler-functional-383702
	25c8780bc9df0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      10 minutes ago      Running             kindnet-cni               1                   1bd0dce351379       kindnet-h9qvl
	71d7d7d7cb585       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       1                   33055bf5bc514       storage-provisioner
	649d32bc054df       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Running             kube-proxy                1                   63ca8b4fd4bec       kube-proxy-27n4x
	8515d054eecd5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   1                   189b543c8cd5c       coredns-66bc5c9577-sxzwb
	a720d09796fe8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   189b543c8cd5c       coredns-66bc5c9577-sxzwb
	531d4b0a6adad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Exited              kindnet-cni               0                   1bd0dce351379       kindnet-h9qvl
	85c3ffe817ca8       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Exited              kube-proxy                0                   63ca8b4fd4bec       kube-proxy-27n4x
	db73bb67b2a2d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Exited              kube-scheduler            0                   e67e6055c8b97       kube-scheduler-functional-383702
	f90cfaf912f69       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      0                   5a1d3696d5e69       etcd-functional-383702
	
	
	==> coredns [8515d054eecd5a444f86dd4f43d164940d668d155f81dc6c68bb9d234a92876d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46095 - 27369 "HINFO IN 2769917989759994095.5307631164563384989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021713353s
	
	
	==> coredns [a720d09796fe8c6300b07136f5a321c333362dd5a3c25385c7ee30aaf1d7ed90] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55613 - 20235 "HINFO IN 7109503854822832070.3156555241200074520. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013759315s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-383702
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-383702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-383702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_36_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-383702
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:48:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:44:29 +0000   Fri, 26 Sep 2025 22:36:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:44:29 +0000   Fri, 26 Sep 2025 22:36:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:44:29 +0000   Fri, 26 Sep 2025 22:36:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:44:29 +0000   Fri, 26 Sep 2025 22:37:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-383702
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0a02ced94d640b981affd2bc93c81c4
	  System UUID:                f593ccff-392d-4c4d-a0b7-5fd374fb4177
	  Boot ID:                    ce94279f-6745-4217-952d-f9fda1755c22
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-np6td                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-vmzsk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-g2lbw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m33s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 coredns-66bc5c9577-sxzwb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-383702                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-h9qvl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-383702              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-383702     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-27n4x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-383702              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-srn29    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gqgkx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-383702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-383702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-383702 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-383702 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-383702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-383702 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                node-controller  Node functional-383702 event: Registered Node functional-383702 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-383702 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-383702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-383702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-383702 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-383702 event: Registered Node functional-383702 in Controller
	
	
	==> dmesg <==
	[  +0.088607] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025515] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.894785] kauditd_printk_skb: 47 callbacks suppressed
	[Sep26 22:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.003220] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023850] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +2.048746] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +4.030628] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +8.319153] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[ +16.382271] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[Sep26 22:34] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000033] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	
	
	==> etcd [9aad9441ea24b0821d0b27d2b6f00a7097cfadb6fe6a12eef6ed624fbdd9b988] <==
	{"level":"warn","ts":"2025-09-26T22:38:22.176687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.182665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.189770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.196105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.202297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.209041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.216245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.222734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.228820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.236172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.242527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.249150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.256372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.262550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.268607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.275673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.282122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.288232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.305592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.311924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.320124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.365195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38824","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:48:21.903564Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1126}
	{"level":"info","ts":"2025-09-26T22:48:21.922953Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1126,"took":"19.033369ms","hash":2599250604,"current-db-size-bytes":3584000,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1748992,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-26T22:48:21.922999Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2599250604,"revision":1126,"compact-revision":-1}
	
	
	==> etcd [f90cfaf912f6987b18aac3393fb4cda3e0e222a40622257ee440fb60cd895054] <==
	{"level":"warn","ts":"2025-09-26T22:36:51.915127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.921316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.928043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.945530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.951934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.958381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:52.006133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54854","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:38:18.020980Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:38:18.021074Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-383702","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:38:18.021184Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:38:18.022756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:38:18.022817Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:38:18.022838Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:38:18.022889Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:38:18.022891Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:38:18.022962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:38:18.023012Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:38:18.023020Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-26T22:38:18.022953Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:38:18.023043Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:38:18.023050Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:38:18.024946Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:38:18.025013Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:38:18.025051Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:38:18.025062Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-383702","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:48:47 up  2:31,  0 users,  load average: 0.16, 1.67, 21.02
	Linux functional-383702 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [25c8780bc9df0394418a978d4dcd37a1fc9e8a43e3e0f927e81a42e3af478801] <==
	I0926 22:46:38.644529       1 main.go:301] handling current node
	I0926 22:46:48.652273       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:48.652311       1 main.go:301] handling current node
	I0926 22:46:58.645165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:46:58.645201       1 main.go:301] handling current node
	I0926 22:47:08.653292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:08.653329       1 main.go:301] handling current node
	I0926 22:47:18.653219       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:18.653262       1 main.go:301] handling current node
	I0926 22:47:28.643826       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:28.643873       1 main.go:301] handling current node
	I0926 22:47:38.652191       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:38.652232       1 main.go:301] handling current node
	I0926 22:47:48.646172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:48.646223       1 main.go:301] handling current node
	I0926 22:47:58.644296       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:58.644328       1 main.go:301] handling current node
	I0926 22:48:08.653476       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:08.653523       1 main.go:301] handling current node
	I0926 22:48:18.649299       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:18.649337       1 main.go:301] handling current node
	I0926 22:48:28.645183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:28.645221       1 main.go:301] handling current node
	I0926 22:48:38.653237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:38.653279       1 main.go:301] handling current node
	
	
	==> kindnet [531d4b0a6adad39d0c664b36894d865492c0c437bd84c8b98e737b8bc27b4ff6] <==
	I0926 22:37:01.223127       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0926 22:37:01.223432       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0926 22:37:01.223573       1 main.go:148] setting mtu 1500 for CNI 
	I0926 22:37:01.223592       1 main.go:178] kindnetd IP family: "ipv4"
	I0926 22:37:01.223622       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-26T22:37:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0926 22:37:01.428167       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0926 22:37:01.428250       1 controller.go:381] "Waiting for informer caches to sync"
	I0926 22:37:01.428668       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0926 22:37:01.429146       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0926 22:37:31.429573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0926 22:37:31.429578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0926 22:37:31.429577       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0926 22:37:31.429631       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I0926 22:37:32.829571       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0926 22:37:32.829605       1 metrics.go:72] Registering metrics
	I0926 22:37:32.829662       1 controller.go:711] "Syncing nftables rules"
	I0926 22:37:41.436190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:37:41.436257       1 main.go:301] handling current node
	I0926 22:37:51.435172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:37:51.435218       1 main.go:301] handling current node
	I0926 22:38:01.432631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:38:01.432669       1 main.go:301] handling current node
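	
	Note: the reflector "i/o timeout" errors against 10.96.0.1:443 at 22:37:31 coincide with the apiserver restart; the watches recover and the caches sync one second later, so kindnet rode the outage out. A quick check that it kept handling the node afterwards (the app=kindnet label is an assumption about minikube's DaemonSet):
	
	  kubectl --context functional-383702 -n kube-system logs -l app=kindnet --tail=20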
	
	
	==> kube-apiserver [0f1d90fff19941eda0ac9f0e8c915241a757b6db2dcaf4db40398d4640877683] <==
	I0926 22:38:45.106292       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.198.196"}
	I0926 22:38:45.591413       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.254.0"}
	E0926 22:38:59.806926       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50086: use of closed network connection
	E0926 22:39:07.782709       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:42046: use of closed network connection
	I0926 22:39:09.264496       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:39:09.389245       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.4.188"}
	I0926 22:39:09.400137       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.0.7"}
	I0926 22:39:14.899740       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.36.26"}
	I0926 22:39:27.279947       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:40.167657       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:41.816993       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:54.178554       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:56.347046       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:11.725080       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:20.281575       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:28.063892       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:26.231773       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:38.188877       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:45:43.192688       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:45:53.118801       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:47:01.411147       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:47:20.493630       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:17.207206       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:22.747326       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:48:37.901687       1 stats.go:136] "Error getting keys" err="empty key: \"\""
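	
	Note: the repeating "Error getting keys" entries are apiserver storage-stats noise; the useful signal in this section is the clusterIP allocations, which can be cross-checked against the live Services:
	
	  kubectl --context functional-383702 get svc -A -o wide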
	
	
	==> kube-controller-manager [2f8f3416d803c18dee54a93529d5dfcf746f63352015dd5c8f9cc13d2fc5c6f1] <==
	I0926 22:38:26.145530       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:38:26.145566       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:38:26.147756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:38:26.147787       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:38:26.148937       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 22:38:26.148966       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:38:26.149033       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:38:26.149047       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:38:26.149057       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:38:26.149115       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:38:26.149158       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0926 22:38:26.149508       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 22:38:26.149516       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0926 22:38:26.152521       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 22:38:26.153763       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:38:26.153764       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:38:26.157054       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0926 22:38:26.171128       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:38:26.186442       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:39:09.323398       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.327773       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.331324       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.334272       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.334831       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.339730       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [d002f125363d7cabaab809253fe8b16078d6f0a4a8a2cefc0f977363ea283a0c] <==
	I0926 22:38:09.246246       1 serving.go:386] Generated self-signed cert in-memory
	I0926 22:38:09.798602       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 22:38:09.798624       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:38:09.799874       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 22:38:09.799876       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 22:38:09.800159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 22:38:09.800188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0926 22:38:19.801762       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
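	
	Note: this controller-manager instance gave up only because the apiserver on 192.168.49.2:8441 was still coming back up. Since /healthz is served to unauthenticated clients in recent Kubernetes (via the system:public-info-viewer role), the same probe can be reproduced directly:
	
	  minikube -p functional-383702 ssh "curl -sk https://192.168.49.2:8441/healthz"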
	
	
	==> kube-proxy [649d32bc054dfc495e951045c142935673ec2afcf84fe1b7ac108730602f4073] <==
	I0926 22:38:08.407573       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0926 22:38:08.408843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:09.749483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:11.860278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:15.529555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0926 22:38:23.607794       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:38:23.607841       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:38:23.607947       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:38:23.627021       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:38:23.627103       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:38:23.632516       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:38:23.632955       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:38:23.632995       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:38:23.634342       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:38:23.634364       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:38:23.634477       1 config.go:200] "Starting service config controller"
	I0926 22:38:23.634490       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:38:23.634487       1 config.go:309] "Starting node config controller"
	I0926 22:38:23.634502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:38:23.634507       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:38:23.634511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:38:23.634512       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:38:23.735461       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:38:23.735539       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:38:23.735488       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
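	
	Note: the startup warning above suggests "--nodeport-addresses primary". In a kubeadm-style cluster kube-proxy reads its KubeProxyConfiguration from the kube-proxy ConfigMap rather than from flags, so the equivalent change would be:
	
	  kubectl --context functional-383702 -n kube-system edit configmap kube-proxy
	  # under the config.conf key, set: nodePortAddresses: ["primary"]  (the value the warning itself suggests)
	  kubectl --context functional-383702 -n kube-system rollout restart daemonset kube-proxy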
	
	
	==> kube-proxy [85c3ffe817ca877caa2254a21d5dd25f802610ae24d4f2564968a7fef018106a] <==
	I0926 22:37:01.068434       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:37:01.136995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:37:01.237563       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:37:01.237606       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:37:01.237688       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:37:01.256792       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:37:01.256866       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:37:01.262250       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:37:01.262677       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:37:01.262733       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:37:01.263902       1 config.go:200] "Starting service config controller"
	I0926 22:37:01.263923       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:37:01.263948       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:37:01.263974       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:37:01.263977       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:37:01.263984       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:37:01.264010       1 config.go:309] "Starting node config controller"
	I0926 22:37:01.264041       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:37:01.365066       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:37:01.365069       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:37:01.365120       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:37:01.365162       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [db73bb67b2a2d2365e200487e84eccc01f234727653f3c7874c52237af5df7da] <==
	E0926 22:36:52.625973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:36:52.626230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:36:52.626667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:52.626732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:52.626914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:36:52.626924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:36:52.627021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:36:52.627041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:36:52.627104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:36:52.627167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:52.627355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:36:52.627929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:53.434654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:36:53.434658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:36:53.464130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:53.532612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:53.599951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:53.604071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0926 22:36:54.122224       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:38:07.736943       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:38:07.737060       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0926 22:38:07.737233       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:38:07.737328       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:38:07.737340       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:38:07.737367       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f2b96981f3ceaa989bd478a919e9a70994001f7bc68ddea7326c32df7f23c4e5] <==
	E0926 22:38:13.250399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:38:13.396218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:38:13.541353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:38:13.800508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:38:13.921958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:15.534678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:38:16.144669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:38:16.468679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:38:16.491112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:38:16.514809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:38:16.967225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:38:17.016006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:38:17.423400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:38:17.522224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:38:17.801647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:38:17.805313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:38:17.818178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:38:18.341101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:38:18.393856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:38:18.477682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:38:18.626764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:38:18.770550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:18.886743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:38:19.557699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0926 22:38:24.811408       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 22:48:20 functional-383702 kubelet[5308]: E0926 22:48:20.497535    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926900497276457  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:20 functional-383702 kubelet[5308]: E0926 22:48:20.497568    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926900497276457  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:23 functional-383702 kubelet[5308]: E0926 22:48:23.392922    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-srn29" podUID="e65d8d59-9833-4c81-a988-0d16ac0b13b4"
	Sep 26 22:48:26 functional-383702 kubelet[5308]: E0926 22:48:26.393193    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gqgkx" podUID="e8be3bc3-46b3-4f2e-9feb-7c9345cb6f97"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.719583    5308 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.719658    5308 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.719902    5308 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-g2lbw_default(7d547f76-64a5-412d-887e-fec4a84af02a): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.719994    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g2lbw" podUID="7d547f76-64a5-412d-887e-fec4a84af02a"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.720582    5308 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.720628    5308 kuberuntime_image.go:43] "Failed to pull image" err="short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.720918    5308 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-np6td_default(c75bf40e-9784-4212-8c9b-bea5b99acfeb): ErrImagePull: short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" logger="UnhandledError"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.721044    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-np6td" podUID="c75bf40e-9784-4212-8c9b-bea5b99acfeb"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.721284    5308 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.721319    5308 kuberuntime_image.go:43] "Failed to pull image" err="short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.721396    5308 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-vmzsk_default(740f15b9-277e-4139-b64b-8d2c055cafd5): ErrImagePull: short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" logger="UnhandledError"
	Sep 26 22:48:28 functional-383702 kubelet[5308]: E0926 22:48:28.722527    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-vmzsk" podUID="740f15b9-277e-4139-b64b-8d2c055cafd5"
	Sep 26 22:48:30 functional-383702 kubelet[5308]: E0926 22:48:30.500058    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926910499770672  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:30 functional-383702 kubelet[5308]: E0926 22:48:30.500117    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926910499770672  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:37 functional-383702 kubelet[5308]: E0926 22:48:37.393623    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-srn29" podUID="e65d8d59-9833-4c81-a988-0d16ac0b13b4"
	Sep 26 22:48:37 functional-383702 kubelet[5308]: E0926 22:48:37.393714    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gqgkx" podUID="e8be3bc3-46b3-4f2e-9feb-7c9345cb6f97"
	Sep 26 22:48:40 functional-383702 kubelet[5308]: E0926 22:48:40.501397    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926920501213570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:40 functional-383702 kubelet[5308]: E0926 22:48:40.501428    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926920501213570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:43 functional-383702 kubelet[5308]: E0926 22:48:43.392027    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-vmzsk" podUID="740f15b9-277e-4139-b64b-8d2c055cafd5"
	Sep 26 22:48:44 functional-383702 kubelet[5308]: E0926 22:48:44.392006    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-np6td" podUID="c75bf40e-9784-4212-8c9b-bea5b99acfeb"
	Sep 26 22:48:44 functional-383702 kubelet[5308]: E0926 22:48:44.392669    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g2lbw" podUID="7d547f76-64a5-412d-887e-fec4a84af02a"
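	
	Note: two distinct pull failures repeat above: docker.io "toomanyrequests" rate limiting (mysql:5.7 and the dashboard images) and CRI-O rejecting the short name kicbase/echo-server:latest because no unqualified-search registries are configured. A hedged workaround that sidesteps both is to pull on the host (authenticating first if rate-limited) and side-load the image into the node; the tag is taken from the log and assumed to exist upstream:
	
	  docker login docker.io
	  docker pull docker.io/kicbase/echo-server:latest
	  minikube -p functional-383702 image load docker.io/kicbase/echo-server:latest
	  # alternatively, allowing short names inside the node would mean adding
	  #   unqualified-search-registries = ["docker.io"]
	  # to /etc/containers/registries.conf (a policy choice, not something the test asserts)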
	
	
	==> storage-provisioner [71d7d7d7cb58555a95d1e8fe6617067b351970ff70ccde0f92ad7463b973bef0] <==
	I0926 22:38:08.310387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0926 22:38:08.313832       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f7052d19e3972521a34090086debaad98ed1becd14bad4a55e19bd8957f1e02f] <==
	W0926 22:48:23.460617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:25.464120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:25.469639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:27.472812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:27.477262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:29.481367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:29.486182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:31.489249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:31.493106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:33.497039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:33.501050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:35.504840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:35.508822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:37.512440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:37.517381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:39.520885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:39.524942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:41.528720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:41.532477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:43.535910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:43.540767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:45.544168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:45.548162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:47.551950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:47.557752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
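	
	Note: these warnings repeat roughly every 2s because the provisioner still reads v1 Endpoints, most likely for its Endpoints-based leader-election lock; nothing is actually failing. The replacement API the warning points to can be listed directly:
	
	  kubectl --context functional-383702 -n kube-system get endpointslices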
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-383702 -n functional-383702
helpers_test.go:269: (dbg) Run:  kubectl --context functional-383702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-383702 describe pod busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-383702 describe pod busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx: exit status 1 (88.194771ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:38:59 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://92da9787b27c88237d1d1d6551b3b2591365045a9e2071cbf62dfd489bb0e804
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:39:02 +0000
	      Finished:     Fri, 26 Sep 2025 22:39:02 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wwhqw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wwhqw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m49s  default-scheduler  Successfully assigned default/busybox-mount to functional-383702
	  Normal  Pulling    9m49s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m46s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.368s (2.368s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m46s  kubelet            Created container: mount-munger
	  Normal  Started    9m46s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-np6td
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:38:43 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6nqhv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6nqhv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-np6td to functional-383702
	  Normal   Pulling    4m22s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m24s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     3m24s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x23 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x23 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-vmzsk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:38:45 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgr9t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vgr9t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vmzsk to functional-383702
	  Normal   Pulling    4m26s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m24s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     3m24s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m6s (x16 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    58s (x21 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-g2lbw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:39:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g55h4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-g55h4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  9m34s                default-scheduler  Successfully assigned default/mysql-5bb876957f-g2lbw to functional-383702
	  Warning  Failed     3m54s                kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    63s (x5 over 9m33s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     20s (x4 over 7m38s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     20s (x5 over 7m38s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x11 over 7m38s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4s (x11 over 7m38s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-srn29" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gqgkx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-383702 describe pod busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.10s)
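
Both hello-node pods above fail for the same reason: CRI-O rejects the unqualified image reference "kicbase/echo-server" because the node's /etc/containers/registries.conf defines neither unqualified-search registries nor a short-name alias for it. A minimal sketch of a containers-registries.conf(5) drop-in that would let the short name resolve, assuming docker.io is the intended registry (the file name and alias below are illustrative, not taken from this run):

	# /etc/containers/registries.conf.d/99-short-names.conf (hypothetical drop-in)
	unqualified-search-registries = ["docker.io"]

	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"

Referencing the image by its fully-qualified name, docker.io/kicbase/echo-server, would sidestep short-name resolution entirely.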

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-383702 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-g2lbw" [7d547f76-64a5-412d-887e-fec4a84af02a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0926 22:39:56.013875  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:42:12.143136  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:42:39.855568  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-383702 -n functional-383702
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-26 22:49:15.245044779 +0000 UTC m=+1193.154981051
functional_test.go:1804: (dbg) Run:  kubectl --context functional-383702 describe po mysql-5bb876957f-g2lbw -n default
functional_test.go:1804: (dbg) kubectl --context functional-383702 describe po mysql-5bb876957f-g2lbw -n default:
Name:             mysql-5bb876957f-g2lbw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-383702/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:39:14 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
  IP:           10.244.0.12
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g55h4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-g55h4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/mysql-5bb876957f-g2lbw to functional-383702
  Warning  Failed     4m21s               kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    90s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     47s (x4 over 8m5s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     47s (x5 over 8m5s)  kubelet            Error: ErrImagePull
  Normal   BackOff    6s (x13 over 8m5s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     6s (x13 over 8m5s)  kubelet            Error: ImagePullBackOff
functional_test.go:1804: (dbg) Run:  kubectl --context functional-383702 logs mysql-5bb876957f-g2lbw -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-383702 logs mysql-5bb876957f-g2lbw -n default: exit status 1 (62.413497ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-g2lbw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-383702 logs mysql-5bb876957f-g2lbw -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
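
The pulls fail because docker.io/mysql:5.7 repeatedly hits Docker Hub's unauthenticated pull rate limit (toomanyrequests). One way to make the test run independent of Hub quota is to side-load the image into the cluster before the deployment is created; a sketch using minikube's image subcommand with the profile name from this run (it assumes the host itself still has pull quota or authenticated credentials):

	docker pull docker.io/mysql:5.7
	minikube -p functional-383702 image load docker.io/mysql:5.7

Logging the node into Docker Hub or configuring a registry mirror for CRI-O are alternative mitigations.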
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-383702
helpers_test.go:243: (dbg) docker inspect functional-383702:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb",
	        "Created": "2025-09-26T22:36:40.056518629Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T22:36:40.09457273Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/hostname",
	        "HostsPath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/hosts",
	        "LogPath": "/var/lib/docker/containers/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb/18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb-json.log",
	        "Name": "/functional-383702",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-383702:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-383702",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18074625eacea6bd2110e394c4e87daf31d042212bb2e896e77fcbd8c48a5efb",
	                "LowerDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21-init/diff:/var/lib/docker/overlay2/539bc53ffef0f27e9ac4c376a14359e91e4b4c4b56d5675ca6caeaaab94e33fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb641d2ed975a407ebbcad1d4b5f8e78e77340e71ee29b5a53ef16b9b46e1a21/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-383702",
	                "Source": "/var/lib/docker/volumes/functional-383702/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-383702",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-383702",
	                "name.minikube.sigs.k8s.io": "functional-383702",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c14bb709d6a4da527cce048680ec7ef48cd8e5ac85d535e44da14f1b9772750c",
	            "SandboxKey": "/var/run/docker/netns/c14bb709d6a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-383702": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:79:04:17:6b:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a83464f985ef865e1ec8346088ff1329167c2e12fc9c4cd85e0e75b2304af91",
	                    "EndpointID": "8043ba5696f833472ccf69672aa395e1b32ba8046195d7cd544a87833268183f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-383702",
	                        "18074625eace"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
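
In the inspect output above, HostConfig.PortBindings requests ephemeral host ports (empty "HostPort") bound to 127.0.0.1, and the ports Docker actually assigned appear under NetworkSettings.Ports (for example 22/tcp -> 127.0.0.1:32778, the node's SSH endpoint). The same mapping can be read back with docker port, shown here with the container name from this run:

	docker port functional-383702 22
	# 127.0.0.1:32778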
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-383702 -n functional-383702
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-383702 logs -n 25: (1.441984337s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image save kicbase/echo-server:functional-383702 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image rm kicbase/echo-server:functional-383702 --alsologtostderr                                                                              │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ image          │ functional-383702 image save --daemon kicbase/echo-server:functional-383702 --alsologtostderr                                                                   │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:39 UTC │ 26 Sep 25 22:39 UTC │
	│ update-context │ functional-383702 update-context --alsologtostderr -v=2                                                                                                         │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ update-context │ functional-383702 update-context --alsologtostderr -v=2                                                                                                         │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ update-context │ functional-383702 update-context --alsologtostderr -v=2                                                                                                         │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls --format short --alsologtostderr                                                                                                     │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls --format yaml --alsologtostderr                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ ssh            │ functional-383702 ssh pgrep buildkitd                                                                                                                           │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │                     │
	│ image          │ functional-383702 image build -t localhost/my-image:functional-383702 testdata/build --alsologtostderr                                                          │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls                                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls --format json --alsologtostderr                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ image          │ functional-383702 image ls --format table --alsologtostderr                                                                                                     │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:44 UTC │ 26 Sep 25 22:44 UTC │
	│ service        │ functional-383702 service list                                                                                                                                  │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:48 UTC │ 26 Sep 25 22:48 UTC │
	│ service        │ functional-383702 service list -o json                                                                                                                          │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:48 UTC │ 26 Sep 25 22:48 UTC │
	│ service        │ functional-383702 service --namespace=default --https --url hello-node                                                                                          │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:48 UTC │                     │
	│ service        │ functional-383702 service hello-node --url --format={{.IP}}                                                                                                     │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:48 UTC │                     │
	│ service        │ functional-383702 service hello-node --url                                                                                                                      │ functional-383702 │ jenkins │ v1.37.0 │ 26 Sep 25 22:48 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:39:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:39:08.213339  253530 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:39:08.213599  253530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:08.213609  253530 out.go:374] Setting ErrFile to fd 2...
	I0926 22:39:08.213614  253530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:08.213934  253530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 22:39:08.214449  253530 out.go:368] Setting JSON to false
	I0926 22:39:08.215621  253530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8497,"bootTime":1758917851,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:39:08.215714  253530 start.go:140] virtualization: kvm guest
	I0926 22:39:08.217535  253530 out.go:179] * [functional-383702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:39:08.219219  253530 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:39:08.219220  253530 notify.go:220] Checking for updates...
	I0926 22:39:08.220685  253530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:39:08.222326  253530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 22:39:08.223663  253530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 22:39:08.224967  253530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:39:08.226240  253530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:39:08.227804  253530 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:39:08.228421  253530 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:39:08.256215  253530 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:39:08.256361  253530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:39:08.320575  253530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-26 22:39:08.309855559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:39:08.320680  253530 docker.go:318] overlay module found
	I0926 22:39:08.323360  253530 out.go:179] * Using the docker driver based on existing profile
	I0926 22:39:08.324648  253530 start.go:304] selected driver: docker
	I0926 22:39:08.324684  253530 start.go:924] validating driver "docker" against &{Name:functional-383702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-383702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:39:08.324792  253530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:39:08.326812  253530 out.go:203] 
	W0926 22:39:08.329163  253530 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 22:39:08.330497  253530 out.go:203] 
	
	
	==> CRI-O <==
	Sep 26 22:48:23 functional-383702 crio[4238]: time="2025-09-26 22:48:23.392575208Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=bfcd556e-ab06-4311-80cc-c6971bab09a9 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:26 functional-383702 crio[4238]: time="2025-09-26 22:48:26.392527261Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=900f526f-88dc-48f4-86f6-194a5b8e69b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:26 functional-383702 crio[4238]: time="2025-09-26 22:48:26.392816792Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=900f526f-88dc-48f4-86f6-194a5b8e69b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:28 functional-383702 crio[4238]: time="2025-09-26 22:48:28.720146395Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fa1e60dd-3384-4e56-bf3f-fe825cc20616 name=/runtime.v1.ImageService/PullImage
	Sep 26 22:48:28 functional-383702 crio[4238]: time="2025-09-26 22:48:28.720961549Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4d546d2b-f555-46e4-afb5-02ca8bae7d6d name=/runtime.v1.ImageService/PullImage
	Sep 26 22:48:37 functional-383702 crio[4238]: time="2025-09-26 22:48:37.392892731Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3d81c3fe-37b5-4ac4-b358-8fa08078d028 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:37 functional-383702 crio[4238]: time="2025-09-26 22:48:37.392930873Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=9d9c53b3-92a3-4cb9-9ffb-929bb360e12f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:37 functional-383702 crio[4238]: time="2025-09-26 22:48:37.393270812Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=3d81c3fe-37b5-4ac4-b358-8fa08078d028 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:37 functional-383702 crio[4238]: time="2025-09-26 22:48:37.393361023Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=9d9c53b3-92a3-4cb9-9ffb-929bb360e12f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:44 functional-383702 crio[4238]: time="2025-09-26 22:48:44.392129525Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=15bea1c6-507a-4bf7-a974-f32024d04bc7 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:44 functional-383702 crio[4238]: time="2025-09-26 22:48:44.392395325Z" level=info msg="Image docker.io/mysql:5.7 not found" id=15bea1c6-507a-4bf7-a974-f32024d04bc7 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:51 functional-383702 crio[4238]: time="2025-09-26 22:48:51.392079375Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=019901ae-8c75-4648-8568-25c632e881ac name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:51 functional-383702 crio[4238]: time="2025-09-26 22:48:51.392483049Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=019901ae-8c75-4648-8568-25c632e881ac name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:52 functional-383702 crio[4238]: time="2025-09-26 22:48:52.393430953Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=398dcff8-2113-4f57-8bf0-241a94ef78c6 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:52 functional-383702 crio[4238]: time="2025-09-26 22:48:52.393662499Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=398dcff8-2113-4f57-8bf0-241a94ef78c6 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:57 functional-383702 crio[4238]: time="2025-09-26 22:48:57.392323817Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=7077acc5-fbae-473e-bfce-3285115b1a6e name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:48:57 functional-383702 crio[4238]: time="2025-09-26 22:48:57.392597084Z" level=info msg="Image docker.io/mysql:5.7 not found" id=7077acc5-fbae-473e-bfce-3285115b1a6e name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:49:03 functional-383702 crio[4238]: time="2025-09-26 22:49:03.392494045Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ea4c271f-d7cc-45c0-9136-3780c4ab183a name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:49:03 functional-383702 crio[4238]: time="2025-09-26 22:49:03.392873205Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=ea4c271f-d7cc-45c0-9136-3780c4ab183a name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:49:05 functional-383702 crio[4238]: time="2025-09-26 22:49:05.392282623Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=37594185-03d7-4bed-b0ae-25c53eda5043 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:49:05 functional-383702 crio[4238]: time="2025-09-26 22:49:05.392599340Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=37594185-03d7-4bed-b0ae-25c53eda5043 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:49:09 functional-383702 crio[4238]: time="2025-09-26 22:49:09.392307393Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=9a3237ea-0929-4551-afd3-30e9b6765fd4 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:49:09 functional-383702 crio[4238]: time="2025-09-26 22:49:09.392555459Z" level=info msg="Image docker.io/mysql:5.7 not found" id=9a3237ea-0929-4551-afd3-30e9b6765fd4 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:49:15 functional-383702 crio[4238]: time="2025-09-26 22:49:15.392702566Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=020e13f4-1169-4ee2-bbb9-793696d6f010 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 22:49:15 functional-383702 crio[4238]: time="2025-09-26 22:49:15.393055791Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=020e13f4-1169-4ee2-bbb9-793696d6f010 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4873b5fd9c1d2       docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285       10 minutes ago      Running             myfrontend                0                   35232540f4a39       sp-pod
	92da9787b27c8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   986415bda1b4d       busybox-mount
	c7f3fb2ed6c31       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8       10 minutes ago      Running             nginx                     0                   8607d64b8e65b       nginx-svc
	f7052d19e3972       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       2                   33055bf5bc514       storage-provisioner
	0f1d90fff1994       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   817512b32d33d       kube-apiserver-functional-383702
	9aad9441ea24b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      1                   5a1d3696d5e69       etcd-functional-383702
	2f8f3416d803c       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   2                   6e3b19ba6bd5a       kube-controller-manager-functional-383702
	d002f125363d7       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      11 minutes ago      Exited              kube-controller-manager   1                   6e3b19ba6bd5a       kube-controller-manager-functional-383702
	f2b96981f3cea       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Running             kube-scheduler            1                   e67e6055c8b97       kube-scheduler-functional-383702
	25c8780bc9df0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Running             kindnet-cni               1                   1bd0dce351379       kindnet-h9qvl
	71d7d7d7cb585       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       1                   33055bf5bc514       storage-provisioner
	649d32bc054df       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Running             kube-proxy                1                   63ca8b4fd4bec       kube-proxy-27n4x
	8515d054eecd5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Running             coredns                   1                   189b543c8cd5c       coredns-66bc5c9577-sxzwb
	a720d09796fe8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   189b543c8cd5c       coredns-66bc5c9577-sxzwb
	531d4b0a6adad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      12 minutes ago      Exited              kindnet-cni               0                   1bd0dce351379       kindnet-h9qvl
	85c3ffe817ca8       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      12 minutes ago      Exited              kube-proxy                0                   63ca8b4fd4bec       kube-proxy-27n4x
	db73bb67b2a2d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      12 minutes ago      Exited              kube-scheduler            0                   e67e6055c8b97       kube-scheduler-functional-383702
	f90cfaf912f69       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      12 minutes ago      Exited              etcd                      0                   5a1d3696d5e69       etcd-functional-383702
	
	
	==> coredns [8515d054eecd5a444f86dd4f43d164940d668d155f81dc6c68bb9d234a92876d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46095 - 27369 "HINFO IN 2769917989759994095.5307631164563384989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021713353s
	
	
	==> coredns [a720d09796fe8c6300b07136f5a321c333362dd5a3c25385c7ee30aaf1d7ed90] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55613 - 20235 "HINFO IN 7109503854822832070.3156555241200074520. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013759315s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-383702
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-383702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=functional-383702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T22_36_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 22:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-383702
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 22:49:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 22:44:29 +0000   Fri, 26 Sep 2025 22:36:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 22:44:29 +0000   Fri, 26 Sep 2025 22:36:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 22:44:29 +0000   Fri, 26 Sep 2025 22:36:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 22:44:29 +0000   Fri, 26 Sep 2025 22:37:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-383702
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0a02ced94d640b981affd2bc93c81c4
	  System UUID:                f593ccff-392d-4c4d-a0b7-5fd374fb4177
	  Boot ID:                    ce94279f-6745-4217-952d-f9fda1755c22
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-np6td                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-vmzsk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-g2lbw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-sxzwb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-383702                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-h9qvl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-383702              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-383702     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-27n4x                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-383702              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-srn29    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gqgkx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-383702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-383702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-383702 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-383702 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-383702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-383702 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           12m                node-controller  Node functional-383702 event: Registered Node functional-383702 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-383702 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-383702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-383702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-383702 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-383702 event: Registered Node functional-383702 in Controller
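	
	Editor's note: nothing abnormal here: the node is Ready and taint-free, and requests (1450m CPU / 732Mi memory) sit far below the 8-CPU / ~32Gi allocatable. The doubled event stream (two "Starting kubelet" blocks, two RegisteredNode events) is the expected trace of the mid-test restart rather than a crash. This section is essentially the output of (a sketch):
	  $ kubectl --context functional-383702 describe node functional-383702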
	
	
	==> dmesg <==
	[  +0.088607] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025515] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.894785] kauditd_printk_skb: 47 callbacks suppressed
	[Sep26 22:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.003220] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023850] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +2.048746] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +4.030628] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[  +8.319153] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[ +16.382271] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
	[Sep26 22:34] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000033] ll header: 00000000: 5e 41 1c 6c 94 13 96 ca 2b d2 b8 98 08 00
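	
	Editor's note: "martian source" means the kernel saw a packet whose source address (127.0.0.1) is not valid for the interface it arrived on (eth0, toward pod IP 10.244.0.21). The 22:33-22:34 timestamps line up with the concurrent Ingress test curling http://127.0.0.1/ inside a node, so this looks like hairpinned loopback traffic being logged rather than a node misconfiguration. Whether martian logging is enabled can be checked in-node (a sketch):
	  $ out/minikube-linux-amd64 -p functional-383702 ssh -- sysctl net.ipv4.conf.all.log_martians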
	
	
	==> etcd [9aad9441ea24b0821d0b27d2b6f00a7097cfadb6fe6a12eef6ed624fbdd9b988] <==
	{"level":"warn","ts":"2025-09-26T22:38:22.176687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.182665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.189770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.196105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.202297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.209041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.216245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.222734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.228820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.236172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.242527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.249150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.256372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.262550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.268607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.275673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.282122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.288232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.305592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.311924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.320124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:38:22.365195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38824","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:48:21.903564Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1126}
	{"level":"info","ts":"2025-09-26T22:48:21.922953Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1126,"took":"19.033369ms","hash":2599250604,"current-db-size-bytes":3584000,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1748992,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-09-26T22:48:21.922999Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2599250604,"revision":1126,"compact-revision":-1}
	
	
	==> etcd [f90cfaf912f6987b18aac3393fb4cda3e0e222a40622257ee440fb60cd895054] <==
	{"level":"warn","ts":"2025-09-26T22:36:51.915127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.921316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.928043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.945530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.951934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:51.958381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-26T22:36:52.006133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54854","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-26T22:38:18.020980Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-26T22:38:18.021074Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-383702","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-26T22:38:18.021184Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:38:18.022756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-26T22:38:18.022817Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:38:18.022838Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-26T22:38:18.022889Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-26T22:38:18.022891Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-26T22:38:18.022962Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:38:18.023012Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:38:18.023020Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-26T22:38:18.022953Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-26T22:38:18.023043Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-26T22:38:18.023050Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:38:18.024946Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-26T22:38:18.025013Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-26T22:38:18.025051Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-26T22:38:18.025062Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-383702","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:49:16 up  2:31,  0 users,  load average: 0.15, 1.53, 20.35
	Linux functional-383702 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [25c8780bc9df0394418a978d4dcd37a1fc9e8a43e3e0f927e81a42e3af478801] <==
	I0926 22:47:08.653329       1 main.go:301] handling current node
	I0926 22:47:18.653219       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:18.653262       1 main.go:301] handling current node
	I0926 22:47:28.643826       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:28.643873       1 main.go:301] handling current node
	I0926 22:47:38.652191       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:38.652232       1 main.go:301] handling current node
	I0926 22:47:48.646172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:48.646223       1 main.go:301] handling current node
	I0926 22:47:58.644296       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:47:58.644328       1 main.go:301] handling current node
	I0926 22:48:08.653476       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:08.653523       1 main.go:301] handling current node
	I0926 22:48:18.649299       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:18.649337       1 main.go:301] handling current node
	I0926 22:48:28.645183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:28.645221       1 main.go:301] handling current node
	I0926 22:48:38.653237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:38.653279       1 main.go:301] handling current node
	I0926 22:48:48.646161       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:48.646209       1 main.go:301] handling current node
	I0926 22:48:58.644605       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:48:58.644666       1 main.go:301] handling current node
	I0926 22:49:08.652747       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:49:08.652786       1 main.go:301] handling current node
	
	
	==> kindnet [531d4b0a6adad39d0c664b36894d865492c0c437bd84c8b98e737b8bc27b4ff6] <==
	I0926 22:37:01.223127       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0926 22:37:01.223432       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0926 22:37:01.223573       1 main.go:148] setting mtu 1500 for CNI 
	I0926 22:37:01.223592       1 main.go:178] kindnetd IP family: "ipv4"
	I0926 22:37:01.223622       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-26T22:37:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0926 22:37:01.428167       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0926 22:37:01.428250       1 controller.go:381] "Waiting for informer caches to sync"
	I0926 22:37:01.428668       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0926 22:37:01.429146       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0926 22:37:31.429573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0926 22:37:31.429578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0926 22:37:31.429577       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0926 22:37:31.429631       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I0926 22:37:32.829571       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0926 22:37:32.829605       1 metrics.go:72] Registering metrics
	I0926 22:37:32.829662       1 controller.go:711] "Syncing nftables rules"
	I0926 22:37:41.436190       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:37:41.436257       1 main.go:301] handling current node
	I0926 22:37:51.435172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:37:51.435218       1 main.go:301] handling current node
	I0926 22:38:01.432631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0926 22:38:01.432669       1 main.go:301] handling current node
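	
	Editor's note: this is the pre-restart kindnet instance. The four 30 s i/o timeouts logged at 22:37:31 are list calls against the service VIP 10.96.0.1 issued right at startup, apparently before the VIP was reachable; the retries succeed and caches sync one second later, after which the ~10 s node-handling loop (the same loop visible in the post-restart instance above) takes over. Tailing the current instance (a sketch; app=kindnet is the label minikube's kindnet DaemonSet uses):
	  $ kubectl --context functional-383702 -n kube-system logs -l app=kindnet -f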
	
	
	==> kube-apiserver [0f1d90fff19941eda0ac9f0e8c915241a757b6db2dcaf4db40398d4640877683] <==
	I0926 22:38:45.106292       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.198.196"}
	I0926 22:38:45.591413       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.254.0"}
	E0926 22:38:59.806926       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50086: use of closed network connection
	E0926 22:39:07.782709       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:42046: use of closed network connection
	I0926 22:39:09.264496       1 controller.go:667] quota admission added evaluator for: namespaces
	I0926 22:39:09.389245       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.4.188"}
	I0926 22:39:09.400137       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.0.7"}
	I0926 22:39:14.899740       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.36.26"}
	I0926 22:39:27.279947       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:39:40.167657       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:41.816993       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:40:54.178554       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:41:56.347046       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:42:11.725080       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:20.281575       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:43:28.063892       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:26.231773       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:44:38.188877       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:45:43.192688       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:45:53.118801       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:47:01.411147       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:47:20.493630       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:17.207206       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 22:48:22.747326       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0926 22:48:37.901687       1 stats.go:136] "Error getting keys" err="empty key: \"\""
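	
	Editor's note: the recurring "Error getting keys" err="empty key" entries repeat every minute or two for the entire run with no accompanying request failures; they look like log noise from the apiserver's storage/stats path rather than a functional problem, though that is an inference from the pattern, not a confirmed diagnosis. The two "Error on socket receive ... use of closed network connection" lines are most likely clients dropping streaming connections mid-session. A quick frequency check (a sketch):
	  $ kubectl --context functional-383702 -n kube-system logs kube-apiserver-functional-383702 | grep -c 'Error getting keys'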
	
	
	==> kube-controller-manager [2f8f3416d803c18dee54a93529d5dfcf746f63352015dd5c8f9cc13d2fc5c6f1] <==
	I0926 22:38:26.145530       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0926 22:38:26.145566       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0926 22:38:26.147756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0926 22:38:26.147787       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0926 22:38:26.148937       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0926 22:38:26.148966       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0926 22:38:26.149033       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0926 22:38:26.149047       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0926 22:38:26.149057       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0926 22:38:26.149115       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0926 22:38:26.149158       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0926 22:38:26.149508       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0926 22:38:26.149516       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0926 22:38:26.152521       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0926 22:38:26.153763       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0926 22:38:26.153764       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0926 22:38:26.157054       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0926 22:38:26.171128       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0926 22:38:26.186442       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0926 22:39:09.323398       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.327773       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.331324       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.334272       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.334831       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0926 22:39:09.339730       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
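	
	Editor's note: the 22:39:09 burst of serviceaccount "kubernetes-dashboard" not found errors is a creation-order race: the ReplicaSets were synced before the dashboard's ServiceAccount object had been created. The controller retries, and both dashboard pods appear scheduled in the describe-nodes section above, so these errors self-resolved. A quick confirmation (a sketch):
	  $ kubectl --context functional-383702 -n kubernetes-dashboard get serviceaccount,deployment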
	
	
	==> kube-controller-manager [d002f125363d7cabaab809253fe8b16078d6f0a4a8a2cefc0f977363ea283a0c] <==
	I0926 22:38:09.246246       1 serving.go:386] Generated self-signed cert in-memory
	I0926 22:38:09.798602       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0926 22:38:09.798624       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:38:09.799874       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0926 22:38:09.799876       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0926 22:38:09.800159       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0926 22:38:09.800188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0926 22:38:19.801762       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
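	
	Editor's note: this controller-manager instance came up at 22:38:09 while the apiserver on 192.168.49.2:8441 was still down, failed its ten-second health wait, and exited; the 2f8f3416... instance above is the successful retry (caches synced at 22:38:26). The same health endpoint can be probed directly (a sketch):
	  $ curl -k https://192.168.49.2:8441/healthz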
	
	
	==> kube-proxy [649d32bc054dfc495e951045c142935673ec2afcf84fe1b7ac108730602f4073] <==
	I0926 22:38:08.407573       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0926 22:38:08.408843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:09.749483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:11.860278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:15.529555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-383702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0926 22:38:23.607794       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:38:23.607841       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:38:23.607947       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:38:23.627021       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:38:23.627103       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:38:23.632516       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:38:23.632955       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:38:23.632995       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:38:23.634342       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:38:23.634364       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:38:23.634477       1 config.go:200] "Starting service config controller"
	I0926 22:38:23.634490       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:38:23.634487       1 config.go:309] "Starting node config controller"
	I0926 22:38:23.634502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:38:23.634507       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:38:23.634511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:38:23.634512       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:38:23.735461       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:38:23.735539       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:38:23.735488       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [85c3ffe817ca877caa2254a21d5dd25f802610ae24d4f2564968a7fef018106a] <==
	I0926 22:37:01.068434       1 server_linux.go:53] "Using iptables proxy"
	I0926 22:37:01.136995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 22:37:01.237563       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 22:37:01.237606       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0926 22:37:01.237688       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 22:37:01.256792       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 22:37:01.256866       1 server_linux.go:132] "Using iptables Proxier"
	I0926 22:37:01.262250       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 22:37:01.262677       1 server.go:527] "Version info" version="v1.34.0"
	I0926 22:37:01.262733       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 22:37:01.263902       1 config.go:200] "Starting service config controller"
	I0926 22:37:01.263923       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 22:37:01.263948       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 22:37:01.263974       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 22:37:01.263977       1 config.go:106] "Starting endpoint slice config controller"
	I0926 22:37:01.263984       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 22:37:01.264010       1 config.go:309] "Starting node config controller"
	I0926 22:37:01.264041       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 22:37:01.365066       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 22:37:01.365069       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 22:37:01.365120       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 22:37:01.365162       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
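	
	Editor's note: both kube-proxy instances (pre- and post-restart) emit the same configuration warning: nodePortAddresses is unset, so NodePort services accept connections on every local IP. For a throwaway test cluster this is harmless; if you did want to apply the log's own suggestion, the kubeadm-managed ConfigMap is the place (a sketch; "primary" is the special value accepted by recent Kubernetes releases):
	  $ kubectl --context functional-383702 -n kube-system edit configmap kube-proxy
	  # under the config.conf key, set:
	  #   nodePortAddresses: ["primary"]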
	
	
	==> kube-scheduler [db73bb67b2a2d2365e200487e84eccc01f234727653f3c7874c52237af5df7da] <==
	E0926 22:36:52.625973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:36:52.626230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:36:52.626667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:52.626732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:36:52.626914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:36:52.626924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:36:52.627021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:36:52.627041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0926 22:36:52.627104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:36:52.627167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:52.627355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:36:52.627929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:53.434654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:36:53.434658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:36:53.464130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:36:53.532612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:36:53.599951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:36:53.604071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0926 22:36:54.122224       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:38:07.736943       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 22:38:07.737060       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0926 22:38:07.737233       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0926 22:38:07.737328       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0926 22:38:07.737340       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0926 22:38:07.737367       1 run.go:72] "command failed" err="finished without leader elect"
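	
	Editor's note: the wall of "forbidden" list errors at 22:36:52-53 is the usual bootstrap window before the system:kube-scheduler RBAC bindings have been reconciled; the informers retry and caches sync at 22:36:54. The 22:38:07 block, including "finished without leader elect", is the graceful shutdown path when the control plane was restarted, not a crash. The binding itself can be inspected (a sketch):
	  $ kubectl --context functional-383702 get clusterrolebinding system:kube-scheduler -o wide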
	
	
	==> kube-scheduler [f2b96981f3ceaa989bd478a919e9a70994001f7bc68ddea7326c32df7f23c4e5] <==
	E0926 22:38:13.250399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:38:13.396218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:38:13.541353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:38:13.800508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:38:13.921958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:15.534678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0926 22:38:16.144669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0926 22:38:16.468679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0926 22:38:16.491112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0926 22:38:16.514809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0926 22:38:16.967225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0926 22:38:17.016006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0926 22:38:17.423400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0926 22:38:17.522224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0926 22:38:17.801647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0926 22:38:17.805313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0926 22:38:17.818178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0926 22:38:18.341101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0926 22:38:18.393856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0926 22:38:18.477682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0926 22:38:18.626764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0926 22:38:18.770550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0926 22:38:18.886743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0926 22:38:19.557699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0926 22:38:24.811408       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
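	
	Editor's note: the replacement scheduler shows the mirror image: every watch fails with "connection refused" while 192.168.49.2:8441 is down (22:38:13-19), then caches sync at 22:38:24 once the apiserver returns. Readiness can be probed through the apiserver itself (a sketch):
	  $ kubectl --context functional-383702 get --raw '/readyz?verbose'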
	
	
	==> kubelet <==
	Sep 26 22:48:30 functional-383702 kubelet[5308]: E0926 22:48:30.500117    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926910499770672  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:37 functional-383702 kubelet[5308]: E0926 22:48:37.393623    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-srn29" podUID="e65d8d59-9833-4c81-a988-0d16ac0b13b4"
	Sep 26 22:48:37 functional-383702 kubelet[5308]: E0926 22:48:37.393714    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gqgkx" podUID="e8be3bc3-46b3-4f2e-9feb-7c9345cb6f97"
	Sep 26 22:48:40 functional-383702 kubelet[5308]: E0926 22:48:40.501397    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926920501213570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:40 functional-383702 kubelet[5308]: E0926 22:48:40.501428    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926920501213570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:43 functional-383702 kubelet[5308]: E0926 22:48:43.392027    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-vmzsk" podUID="740f15b9-277e-4139-b64b-8d2c055cafd5"
	Sep 26 22:48:44 functional-383702 kubelet[5308]: E0926 22:48:44.392006    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-np6td" podUID="c75bf40e-9784-4212-8c9b-bea5b99acfeb"
	Sep 26 22:48:44 functional-383702 kubelet[5308]: E0926 22:48:44.392669    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g2lbw" podUID="7d547f76-64a5-412d-887e-fec4a84af02a"
	Sep 26 22:48:50 functional-383702 kubelet[5308]: E0926 22:48:50.503197    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926930502972477  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:50 functional-383702 kubelet[5308]: E0926 22:48:50.503228    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926930502972477  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:48:51 functional-383702 kubelet[5308]: E0926 22:48:51.392918    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-srn29" podUID="e65d8d59-9833-4c81-a988-0d16ac0b13b4"
	Sep 26 22:48:52 functional-383702 kubelet[5308]: E0926 22:48:52.393958    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gqgkx" podUID="e8be3bc3-46b3-4f2e-9feb-7c9345cb6f97"
	Sep 26 22:48:55 functional-383702 kubelet[5308]: E0926 22:48:55.391944    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-vmzsk" podUID="740f15b9-277e-4139-b64b-8d2c055cafd5"
	Sep 26 22:48:56 functional-383702 kubelet[5308]: E0926 22:48:56.392421    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-np6td" podUID="c75bf40e-9784-4212-8c9b-bea5b99acfeb"
	Sep 26 22:48:57 functional-383702 kubelet[5308]: E0926 22:48:57.392975    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g2lbw" podUID="7d547f76-64a5-412d-887e-fec4a84af02a"
	Sep 26 22:49:00 functional-383702 kubelet[5308]: E0926 22:49:00.505330    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926940505075344  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:49:00 functional-383702 kubelet[5308]: E0926 22:49:00.505369    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926940505075344  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:49:03 functional-383702 kubelet[5308]: E0926 22:49:03.393279    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-srn29" podUID="e65d8d59-9833-4c81-a988-0d16ac0b13b4"
	Sep 26 22:49:05 functional-383702 kubelet[5308]: E0926 22:49:05.393065    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gqgkx" podUID="e8be3bc3-46b3-4f2e-9feb-7c9345cb6f97"
	Sep 26 22:49:08 functional-383702 kubelet[5308]: E0926 22:49:08.391803    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-vmzsk" podUID="740f15b9-277e-4139-b64b-8d2c055cafd5"
	Sep 26 22:49:09 functional-383702 kubelet[5308]: E0926 22:49:09.392900    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-g2lbw" podUID="7d547f76-64a5-412d-887e-fec4a84af02a"
	Sep 26 22:49:10 functional-383702 kubelet[5308]: E0926 22:49:10.507583    5308 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758926950507357693  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:49:10 functional-383702 kubelet[5308]: E0926 22:49:10.507615    5308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758926950507357693  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262385}  inodes_used:{value:116}}"
	Sep 26 22:49:11 functional-383702 kubelet[5308]: E0926 22:49:11.391915    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-np6td" podUID="c75bf40e-9784-4212-8c9b-bea5b99acfeb"
	Sep 26 22:49:15 functional-383702 kubelet[5308]: E0926 22:49:15.393527    5308 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-srn29" podUID="e65d8d59-9833-4c81-a988-0d16ac0b13b4"
	
	
	==> storage-provisioner [71d7d7d7cb58555a95d1e8fe6617067b351970ff70ccde0f92ad7463b973bef0] <==
	I0926 22:38:08.310387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0926 22:38:08.313832       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f7052d19e3972521a34090086debaad98ed1becd14bad4a55e19bd8957f1e02f] <==
	W0926 22:48:51.572007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:53.575677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:53.580716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:55.583876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:55.588658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:57.591812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:57.596646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:59.600034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:48:59.603853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:01.606959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:01.611220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:03.614360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:03.618298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:05.621620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:05.625557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:07.628937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:07.634187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:09.637060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:09.642246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:11.645022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:11.649913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:13.652853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:13.656486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:15.659808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 22:49:15.663982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
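Two distinct pull failures recur in the kubelet log above. First, CRI-O rejects the bare name "kicbase/echo-server" because the node's /etc/containers/registries.conf defines no unqualified-search registries, so the short name can never be expanded. Second, the fully qualified docker.io pulls (mysql:5.7, kubernetesui/dashboard, kubernetesui/metrics-scraper) fail with toomanyrequests, Docker Hub's unauthenticated rate limit; the eviction-manager "missing image stats" lines are unrelated kubelet/CRI-O stats noise. A minimal sketch of the registries.conf entry that would let the short name resolve, using the documented containers-registries.conf(5) syntax rather than this runner's actual configuration:

	# /etc/containers/registries.conf (hypothetical fix for the short-name error)
	unqualified-search-registries = ["docker.io"]

With that entry in place, CRI-O would expand "kicbase/echo-server" to "docker.io/kicbase/echo-server" before attempting the pull.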
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-383702 -n functional-383702
helpers_test.go:269: (dbg) Run:  kubectl --context functional-383702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-383702 describe pod busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-383702 describe pod busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx: exit status 1 (85.344118ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:38:59 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://92da9787b27c88237d1d1d6551b3b2591365045a9e2071cbf62dfd489bb0e804
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 26 Sep 2025 22:39:02 +0000
	      Finished:     Fri, 26 Sep 2025 22:39:02 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wwhqw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wwhqw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-383702
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.368s (2.368s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-np6td
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:38:43 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6nqhv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6nqhv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-np6td to functional-383702
	  Normal   Pulling    4m51s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m53s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     3m53s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    33s (x23 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     33s (x23 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-vmzsk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:38:45 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgr9t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vgr9t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vmzsk to functional-383702
	  Normal   Pulling    4m55s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m53s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     3m53s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    22s (x24 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     22s (x24 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-g2lbw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-383702/192.168.49.2
	Start Time:       Fri, 26 Sep 2025 22:39:14 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g55h4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-g55h4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/mysql-5bb876957f-g2lbw to functional-383702
	  Warning  Failed     4m23s               kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    92s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     49s (x4 over 8m7s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     49s (x5 over 8m7s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x13 over 8m7s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     8s (x13 over 8m7s)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-srn29" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gqgkx" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-383702 describe pod busybox-mount hello-node-75c85bcc94-np6td hello-node-connect-7d85dfc575-vmzsk mysql-5bb876957f-g2lbw dashboard-metrics-scraper-77bf4d6c4c-srn29 kubernetes-dashboard-855c9754f9-gqgkx: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.83s)
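The mysql pod never starts because every anonymous docker.io pull from this runner is answered with toomanyrequests. One hypothetical mitigation, with placeholder credentials and secret name, is to authenticate the pulls by attaching a registry credential to the namespace's default service account:

	# hypothetical: authenticate docker.io pulls to lift the anonymous rate limit
	kubectl --context functional-383702 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-383702 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'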

TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-383702 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-383702 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-np6td" [c75bf40e-9784-4212-8c9b-bea5b99acfeb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-383702 -n functional-383702
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-26 22:48:44.003481612 +0000 UTC m=+1161.913417878
functional_test.go:1460: (dbg) Run:  kubectl --context functional-383702 describe po hello-node-75c85bcc94-np6td -n default
functional_test.go:1460: (dbg) kubectl --context functional-383702 describe po hello-node-75c85bcc94-np6td -n default:
Name:             hello-node-75c85bcc94-np6td
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-383702/192.168.49.2
Start Time:       Fri, 26 Sep 2025 22:38:43 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6nqhv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6nqhv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-np6td to functional-383702
  Normal   Pulling    4m18s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     3m20s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     3m20s (x5 over 10m)  kubelet            Error: ErrImagePull
  Warning  Failed     2m4s (x16 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    61s (x21 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-383702 logs hello-node-75c85bcc94-np6td -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-383702 logs hello-node-75c85bcc94-np6td -n default: exit status 1 (69.458973ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-np6td" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-383702 logs hello-node-75c85bcc94-np6td -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)
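DeployApp fails for the same reason the kubelet log showed earlier: the deployment is created with the short name "kicbase/echo-server", which CRI-O cannot resolve when no unqualified-search registries are configured. A hypothetical variant of the test's create command that sidesteps short-name resolution by fully qualifying the image:

	# hypothetical: a fully qualified reference needs no registries.conf lookup
	kubectl --context functional-383702 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest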

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 service --namespace=default --https --url hello-node: exit status 115 (550.327017ms)

-- stdout --
	https://192.168.49.2:30139
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-383702 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
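SVC_UNREACHABLE here, and in the Format and URL subtests below, is a downstream symptom: the hello-node service exists and has a NodePort, but its only pod is stuck in ImagePullBackOff, so the service has no ready endpoints. A quick confirmation with standard kubectl (same context as the rest of this test):

	# a NodePort service with no ready endpoints explains SVC_UNREACHABLE
	kubectl --context functional-383702 get endpoints hello-node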

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 service hello-node --url --format={{.IP}}: exit status 115 (541.808662ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-383702 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 service hello-node --url: exit status 115 (526.90743ms)

-- stdout --
	http://192.168.49.2:30139
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-383702 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30139
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestNetworkPlugins/group/calico/Start (925.68s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (15m25.641033436s)
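The start run overshoots the 15m --wait-timeout (exit status 80 after 15m25s), meaning some component being waited on never became Ready. A hypothetical first triage step once the API server is reachable is to inspect the Calico pods directly; k8s-app=calico-node is Calico's standard daemonset label:

	# hypothetical triage: check whether calico-node ever became Ready
	kubectl --context calico-227717 -n kube-system get pods -l k8s-app=calico-node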

-- stdout --
	* [calico-227717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-227717" primary control-plane node in "calico-227717" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0926 23:19:27.478474  502638 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:19:27.478599  502638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:19:27.478612  502638 out.go:374] Setting ErrFile to fd 2...
	I0926 23:19:27.478619  502638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:19:27.478841  502638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 23:19:27.479412  502638 out.go:368] Setting JSON to false
	I0926 23:19:27.480803  502638 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10916,"bootTime":1758917851,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:19:27.480903  502638 start.go:140] virtualization: kvm guest
	I0926 23:19:27.482872  502638 out.go:179] * [calico-227717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:19:27.484393  502638 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:19:27.484393  502638 notify.go:220] Checking for updates...
	I0926 23:19:27.485458  502638 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:19:27.486874  502638 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 23:19:27.488262  502638 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 23:19:27.489611  502638 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:19:27.490857  502638 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:19:27.492486  502638 config.go:182] Loaded profile config "auto-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:19:27.492592  502638 config.go:182] Loaded profile config "default-k8s-diff-port-441435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:19:27.492674  502638 config.go:182] Loaded profile config "kindnet-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:19:27.492785  502638 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:19:27.516837  502638 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 23:19:27.516976  502638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:19:27.575900  502638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-26 23:19:27.563446601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:19:27.576056  502638 docker.go:318] overlay module found
	I0926 23:19:27.578240  502638 out.go:179] * Using the docker driver based on user configuration
	I0926 23:19:27.579533  502638 start.go:304] selected driver: docker
	I0926 23:19:27.579547  502638 start.go:924] validating driver "docker" against <nil>
	I0926 23:19:27.579559  502638 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:19:27.580194  502638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:19:27.636861  502638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-26 23:19:27.626530674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:19:27.637022  502638 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 23:19:27.637258  502638 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:19:27.638980  502638 out.go:179] * Using Docker driver with root privileges
	I0926 23:19:27.640094  502638 cni.go:84] Creating CNI manager for "calico"
	I0926 23:19:27.640117  502638 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0926 23:19:27.640204  502638 start.go:348] cluster config:
	{Name:calico-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:19:27.641373  502638 out.go:179] * Starting "calico-227717" primary control-plane node in "calico-227717" cluster
	I0926 23:19:27.642478  502638 cache.go:123] Beginning downloading kic base image for docker with crio
	I0926 23:19:27.643670  502638 out.go:179] * Pulling base image v0.0.48 ...
	I0926 23:19:27.644822  502638 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:19:27.644863  502638 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:19:27.644873  502638 cache.go:58] Caching tarball of preloaded images
	I0926 23:19:27.644928  502638 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 23:19:27.644978  502638 preload.go:172] Found /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:19:27.644993  502638 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 23:19:27.645131  502638 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/config.json ...
	I0926 23:19:27.645157  502638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/config.json: {Name:mk5b0ae01c7eb4c03f59faf6f59b1e4817e3d362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:27.666250  502638 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0926 23:19:27.666271  502638 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0926 23:19:27.666287  502638 cache.go:232] Successfully downloaded all kic artifacts
	I0926 23:19:27.666351  502638 start.go:360] acquireMachinesLock for calico-227717: {Name:mk9296169eee2a4ebcc41438bee257c85d680556 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:19:27.666473  502638 start.go:364] duration metric: took 101.86µs to acquireMachinesLock for "calico-227717"
	I0926 23:19:27.666506  502638 start.go:93] Provisioning new machine with config: &{Name:calico-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:19:27.666602  502638 start.go:125] createHost starting for "" (driver="docker")
	I0926 23:19:27.668458  502638 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0926 23:19:27.668760  502638 start.go:159] libmachine.API.Create for "calico-227717" (driver="docker")
	I0926 23:19:27.668799  502638 client.go:168] LocalClient.Create starting
	I0926 23:19:27.668891  502638 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem
	I0926 23:19:27.668941  502638 main.go:141] libmachine: Decoding PEM data...
	I0926 23:19:27.668962  502638 main.go:141] libmachine: Parsing certificate...
	I0926 23:19:27.669046  502638 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem
	I0926 23:19:27.669077  502638 main.go:141] libmachine: Decoding PEM data...
	I0926 23:19:27.669132  502638 main.go:141] libmachine: Parsing certificate...
	I0926 23:19:27.669605  502638 cli_runner.go:164] Run: docker network inspect calico-227717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 23:19:27.687593  502638 cli_runner.go:211] docker network inspect calico-227717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 23:19:27.687678  502638 network_create.go:284] running [docker network inspect calico-227717] to gather additional debugging logs...
	I0926 23:19:27.687703  502638 cli_runner.go:164] Run: docker network inspect calico-227717
	W0926 23:19:27.703818  502638 cli_runner.go:211] docker network inspect calico-227717 returned with exit code 1
	I0926 23:19:27.703852  502638 network_create.go:287] error running [docker network inspect calico-227717]: docker network inspect calico-227717: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-227717 not found
	I0926 23:19:27.703878  502638 network_create.go:289] output of [docker network inspect calico-227717]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-227717 not found
	
	** /stderr **
	I0926 23:19:27.703990  502638 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 23:19:27.721579  502638 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-61b47db54300 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:5a:0f:e5:da:60} reservation:<nil>}
	I0926 23:19:27.722118  502638 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d81bcc6cb1d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:9e:9a:18:c3:8e} reservation:<nil>}
	I0926 23:19:27.722622  502638 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6dea4b9b493 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:a1:51:b0:46:1c} reservation:<nil>}
	I0926 23:19:27.722916  502638 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-01717b46a4b4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:57:d2:c6:c9:9b} reservation:<nil>}
	I0926 23:19:27.723542  502638 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-67d4b013c4d0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ce:8f:12:6f:0a:34} reservation:<nil>}
	I0926 23:19:27.724199  502638 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7f3a1c78f885 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:a2:2d:c0:44:67:eb} reservation:<nil>}
	I0926 23:19:27.724939  502638 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce1110}
	I0926 23:19:27.724964  502638 network_create.go:124] attempt to create docker network calico-227717 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0926 23:19:27.725030  502638 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-227717 calico-227717
	I0926 23:19:27.786015  502638 network_create.go:108] docker network calico-227717 192.168.103.0/24 created
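
The network.go lines above show the free-subnet scan: starting from 192.168.49.0/24 the code steps the third octet by 9 (49, 58, 67, 76, 85, 94) and takes the first /24 that no existing Docker bridge claims, here 192.168.103.0/24. A minimal Go sketch of that scan, under the assumption that the step and candidate list match what the log shows; firstFreeSubnet and the taken set are hypothetical stand-ins for the bridge enumeration above:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet walks /24 candidates from start, stepping the third
    // octet, and returns the first subnet not present in taken.
    func firstFreeSubnet(start net.IP, step int, taken map[string]bool) (*net.IPNet, error) {
        ip := start.To4()
        for third := int(ip[2]); third < 255; third += step {
            cidr := fmt.Sprintf("%d.%d.%d.0/24", ip[0], ip[1], third)
            if taken[cidr] {
                continue // e.g. "skipping subnet 192.168.49.0/24 that is taken"
            }
            _, subnet, err := net.ParseCIDR(cidr)
            if err != nil {
                return nil, err
            }
            return subnet, nil
        }
        return nil, fmt.Errorf("no free /24 found")
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true, "192.168.94.0/24": true,
        }
        subnet, err := firstFreeSubnet(net.ParseIP("192.168.49.0"), 9, taken)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("using free private subnet", subnet) // 192.168.103.0/24
    }
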
	I0926 23:19:27.786045  502638 kic.go:121] calculated static IP "192.168.103.2" for the "calico-227717" container
	I0926 23:19:27.786155  502638 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 23:19:27.803740  502638 cli_runner.go:164] Run: docker volume create calico-227717 --label name.minikube.sigs.k8s.io=calico-227717 --label created_by.minikube.sigs.k8s.io=true
	I0926 23:19:27.824401  502638 oci.go:103] Successfully created a docker volume calico-227717
	I0926 23:19:27.824492  502638 cli_runner.go:164] Run: docker run --rm --name calico-227717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-227717 --entrypoint /usr/bin/test -v calico-227717:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 23:19:28.204367  502638 oci.go:107] Successfully prepared a docker volume calico-227717
	I0926 23:19:28.204415  502638 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:19:28.204444  502638 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 23:19:28.204531  502638 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-227717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 23:19:32.443432  502638 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-227717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.238839322s)
	I0926 23:19:32.443470  502638 kic.go:203] duration metric: took 4.239023731s to extract preloaded images to volume ...
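
The docker run above seeds the node's named volume before the node container even exists: a throwaway --rm container mounts the preload tarball read-only alongside the volume and untars it into /extractDir. A hedged Go wrapper around that same command; seedVolume is an illustrative name, and the digest-pinned image reference is shortened:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // seedVolume replays the extraction step: tar runs inside a disposable
    // container with the tarball and the named volume both mounted.
    func seedVolume(tarball, volume, image string) error {
        return exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
    }

    func main() {
        err := seedVolume(
            "preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4",
            "calico-227717",
            "gcr.io/k8s-minikube/kicbase:v0.0.48")
        fmt.Println(err)
    }
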
	W0926 23:19:32.443548  502638 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 23:19:32.443574  502638 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 23:19:32.443611  502638 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 23:19:32.503462  502638 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-227717 --name calico-227717 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-227717 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-227717 --network calico-227717 --ip 192.168.103.2 --volume calico-227717:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0926 23:19:32.783252  502638 cli_runner.go:164] Run: docker container inspect calico-227717 --format={{.State.Running}}
	I0926 23:19:32.800806  502638 cli_runner.go:164] Run: docker container inspect calico-227717 --format={{.State.Status}}
	I0926 23:19:32.819462  502638 cli_runner.go:164] Run: docker exec calico-227717 stat /var/lib/dpkg/alternatives/iptables
	I0926 23:19:32.863746  502638 oci.go:144] the created container "calico-227717" has a running status.
	I0926 23:19:32.863785  502638 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-208519/.minikube/machines/calico-227717/id_rsa...
	I0926 23:19:33.130438  502638 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-208519/.minikube/machines/calico-227717/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 23:19:33.157384  502638 cli_runner.go:164] Run: docker container inspect calico-227717 --format={{.State.Status}}
	I0926 23:19:33.177209  502638 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 23:19:33.177234  502638 kic_runner.go:114] Args: [docker exec --privileged calico-227717 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0926 23:19:33.223927  502638 cli_runner.go:164] Run: docker container inspect calico-227717 --format={{.State.Status}}
	I0926 23:19:33.243642  502638 machine.go:93] provisionDockerMachine start ...
	I0926 23:19:33.243794  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:33.262931  502638 main.go:141] libmachine: Using SSH client type: native
	I0926 23:19:33.263269  502638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0926 23:19:33.263288  502638 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 23:19:33.401451  502638 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-227717
	
	I0926 23:19:33.401481  502638 ubuntu.go:182] provisioning hostname "calico-227717"
	I0926 23:19:33.401555  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:33.419406  502638 main.go:141] libmachine: Using SSH client type: native
	I0926 23:19:33.419636  502638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0926 23:19:33.419651  502638 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-227717 && echo "calico-227717" | sudo tee /etc/hostname
	I0926 23:19:33.572123  502638 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-227717
	
	I0926 23:19:33.572206  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:33.590281  502638 main.go:141] libmachine: Using SSH client type: native
	I0926 23:19:33.590583  502638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0926 23:19:33.590618  502638 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-227717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-227717/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-227717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 23:19:33.728029  502638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
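
Every provisioning command from here on reaches the node over SSH on a loopback port that Docker chose when publishing the container's 22/tcp (33108 in this run). A hedged sketch of that lookup, reusing the inspect template from the log; hostSSHPort is a hypothetical helper, not minikube's API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort asks Docker which 127.0.0.1 port maps to the node's sshd.
    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("calico-227717")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        // The provisioner then dials, roughly:
        //   ssh -p <port> -i .../machines/calico-227717/id_rsa docker@127.0.0.1
        fmt.Println("sshd published on 127.0.0.1:" + port)
    }
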
	I0926 23:19:33.728066  502638 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-208519/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-208519/.minikube}
	I0926 23:19:33.728110  502638 ubuntu.go:190] setting up certificates
	I0926 23:19:33.728122  502638 provision.go:84] configureAuth start
	I0926 23:19:33.728173  502638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-227717
	I0926 23:19:33.746192  502638 provision.go:143] copyHostCerts
	I0926 23:19:33.746260  502638 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem, removing ...
	I0926 23:19:33.746274  502638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem
	I0926 23:19:33.746364  502638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem (1078 bytes)
	I0926 23:19:33.746472  502638 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem, removing ...
	I0926 23:19:33.746484  502638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem
	I0926 23:19:33.746514  502638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem (1123 bytes)
	I0926 23:19:33.746575  502638 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem, removing ...
	I0926 23:19:33.746582  502638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem
	I0926 23:19:33.746606  502638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem (1675 bytes)
	I0926 23:19:33.746657  502638 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem org=jenkins.calico-227717 san=[127.0.0.1 192.168.103.2 calico-227717 localhost minikube]
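
provision.go:117 issues a server certificate whose SANs cover the loopback address, the node's static IP, and its hostnames. A minimal crypto/x509 sketch that reproduces the SAN set and the 26280h lifetime from the config above; it self-signs for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.calico-227717"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
            // san=[127.0.0.1 192.168.103.2 calico-227717 localhost minikube]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            DNSNames:    []string{"calico-227717", "localhost", "minikube"},
            KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
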
	I0926 23:19:34.006212  502638 provision.go:177] copyRemoteCerts
	I0926 23:19:34.006286  502638 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:19:34.006338  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:34.024697  502638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/calico-227717/id_rsa Username:docker}
	I0926 23:19:34.123195  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 23:19:34.151631  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 23:19:34.177910  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 23:19:34.205871  502638 provision.go:87] duration metric: took 477.735055ms to configureAuth
	I0926 23:19:34.205899  502638 ubuntu.go:206] setting minikube options for container-runtime
	I0926 23:19:34.206057  502638 config.go:182] Loaded profile config "calico-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:19:34.206200  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:34.225589  502638 main.go:141] libmachine: Using SSH client type: native
	I0926 23:19:34.225824  502638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0926 23:19:34.225849  502638 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 23:19:34.465555  502638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 23:19:34.465585  502638 machine.go:96] duration metric: took 1.221912394s to provisionDockerMachine
	I0926 23:19:34.465597  502638 client.go:171] duration metric: took 6.796792036s to LocalClient.Create
	I0926 23:19:34.465618  502638 start.go:167] duration metric: took 6.796862229s to libmachine.API.Create "calico-227717"
	I0926 23:19:34.465627  502638 start.go:293] postStartSetup for "calico-227717" (driver="docker")
	I0926 23:19:34.465641  502638 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:19:34.465718  502638 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:19:34.465759  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:34.483755  502638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/calico-227717/id_rsa Username:docker}
	I0926 23:19:34.585197  502638 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:19:34.588897  502638 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 23:19:34.588925  502638 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 23:19:34.588932  502638 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 23:19:34.588939  502638 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 23:19:34.588950  502638 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-208519/.minikube/addons for local assets ...
	I0926 23:19:34.589002  502638 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-208519/.minikube/files for local assets ...
	I0926 23:19:34.589115  502638 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem -> 2121372.pem in /etc/ssl/certs
	I0926 23:19:34.589213  502638 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:19:34.599490  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem --> /etc/ssl/certs/2121372.pem (1708 bytes)
	I0926 23:19:34.628668  502638 start.go:296] duration metric: took 163.024554ms for postStartSetup
	I0926 23:19:34.629015  502638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-227717
	I0926 23:19:34.646126  502638 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/config.json ...
	I0926 23:19:34.646415  502638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:19:34.646471  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:34.664584  502638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/calico-227717/id_rsa Username:docker}
	I0926 23:19:34.758406  502638 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 23:19:34.763286  502638 start.go:128] duration metric: took 7.096662275s to createHost
	I0926 23:19:34.763318  502638 start.go:83] releasing machines lock for "calico-227717", held for 7.096829689s
	I0926 23:19:34.763388  502638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-227717
	I0926 23:19:34.781414  502638 ssh_runner.go:195] Run: cat /version.json
	I0926 23:19:34.781473  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:34.781512  502638 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:19:34.781585  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:34.800238  502638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/calico-227717/id_rsa Username:docker}
	I0926 23:19:34.800646  502638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/calico-227717/id_rsa Username:docker}
	I0926 23:19:34.968578  502638 ssh_runner.go:195] Run: systemctl --version
	I0926 23:19:34.974065  502638 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 23:19:35.117416  502638 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 23:19:35.122984  502638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:19:35.147520  502638 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0926 23:19:35.147602  502638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:19:35.178595  502638 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
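
Since the calico CNI will be installed later, cni.go first masks the default loopback/bridge/podman configs by renaming them with a .mk_disabled suffix, which is what the find ... -exec mv invocations above do. A hedged Go equivalent; disableCNIConfigs is an illustrative name:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs renames any config matching the patterns so the
    // runtime stops loading it, mirroring `mv {} {}.mk_disabled`.
    func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
        var disabled []string
        for _, p := range patterns {
            matches, err := filepath.Glob(filepath.Join(dir, p))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already masked
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        got, err := disableCNIConfigs("/etc/cni/net.d", []string{"*bridge*", "*podman*"})
        fmt.Println(got, err) // e.g. the 87-podman-bridge.conflist seen above
    }
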
	I0926 23:19:35.178624  502638 start.go:495] detecting cgroup driver to use...
	I0926 23:19:35.178657  502638 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 23:19:35.178721  502638 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 23:19:35.195555  502638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 23:19:35.208664  502638 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:19:35.208716  502638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:19:35.223668  502638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:19:35.239548  502638 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:19:35.311375  502638 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:19:35.383722  502638 docker.go:234] disabling docker service ...
	I0926 23:19:35.383795  502638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:19:35.403714  502638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:19:35.416257  502638 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:19:35.488861  502638 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:19:35.670381  502638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:19:35.682821  502638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:19:35.700909  502638 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 23:19:35.700965  502638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:19:35.714280  502638 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0926 23:19:35.714350  502638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:19:35.724948  502638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:19:35.735987  502638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:19:35.747613  502638 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:19:35.758065  502638 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:19:35.768882  502638 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:19:35.786864  502638 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:19:35.797619  502638 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:19:35.806664  502638 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:19:35.815972  502638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:19:35.883722  502638 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 23:19:35.985370  502638 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 23:19:35.985433  502638 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 23:19:35.989401  502638 start.go:563] Will wait 60s for crictl version
	I0926 23:19:35.989452  502638 ssh_runner.go:195] Run: which crictl
	I0926 23:19:35.993288  502638 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:19:36.030068  502638 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
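
After the crio restart, start.go:542/563 deliberately polls instead of assuming readiness: first for the crio.sock path, then for a crictl version response, each with a 60s budget. A small hedged sketch of such a stat poll; waitForSocket and the 500ms interval are assumptions, not minikube's exact code:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket stats path until it appears or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket is there; crictl version can be probed next
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
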
	I0926 23:19:36.030191  502638 ssh_runner.go:195] Run: crio --version
	I0926 23:19:36.068038  502638 ssh_runner.go:195] Run: crio --version
	I0926 23:19:36.108956  502638 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0926 23:19:36.110199  502638 cli_runner.go:164] Run: docker network inspect calico-227717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 23:19:36.127871  502638 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0926 23:19:36.132011  502638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:19:36.144289  502638 kubeadm.go:883] updating cluster {Name:calico-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:19:36.144416  502638 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:19:36.144479  502638 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:19:36.215703  502638 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:19:36.215730  502638 crio.go:433] Images already preloaded, skipping extraction
	I0926 23:19:36.215789  502638 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:19:36.251729  502638 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:19:36.251750  502638 cache_images.go:85] Images are preloaded, skipping loading
	I0926 23:19:36.251760  502638 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0926 23:19:36.251859  502638 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-227717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0926 23:19:36.251932  502638 ssh_runner.go:195] Run: crio config
	I0926 23:19:36.296667  502638 cni.go:84] Creating CNI manager for "calico"
	I0926 23:19:36.296704  502638 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:19:36.296729  502638 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-227717 NodeName:calico-227717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:19:36.296903  502638 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-227717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:19:36.296968  502638 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:19:36.307334  502638 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:19:36.307411  502638 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:19:36.317749  502638 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0926 23:19:36.337516  502638 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:19:36.360668  502638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I0926 23:19:36.379837  502638 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0926 23:19:36.383548  502638 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
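
The /etc/hosts updates here and at host.minikube.internal earlier are idempotent: drop any line ending in a tab plus the name, then append the fresh "IP<tab>name" mapping. A hedged Go equivalent of that grep -v / append pipeline; pinHost is illustrative:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost drops any existing entry for name and appends "ip<TAB>name",
    // mirroring the { grep -v ...; echo ...; } > /tmp/h.$$ pipeline above.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // same effect as grep -v $'\t<name>$'
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(pinHost("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal"))
    }
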
	I0926 23:19:36.395319  502638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:19:36.464664  502638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:19:36.487613  502638 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717 for IP: 192.168.103.2
	I0926 23:19:36.487633  502638 certs.go:195] generating shared ca certs ...
	I0926 23:19:36.487648  502638 certs.go:227] acquiring lock for ca certs: {Name:mk7fa2bdff33a744d301294affc1d74bea26e4b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:36.487790  502638 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key
	I0926 23:19:36.487835  502638 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key
	I0926 23:19:36.487845  502638 certs.go:257] generating profile certs ...
	I0926 23:19:36.487903  502638 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/client.key
	I0926 23:19:36.487915  502638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/client.crt with IP's: []
	I0926 23:19:36.591022  502638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/client.crt ...
	I0926 23:19:36.591060  502638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/client.crt: {Name:mkbc7885d791b47b9c72705755e453b94cc583b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:36.591296  502638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/client.key ...
	I0926 23:19:36.591318  502638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/client.key: {Name:mk1b4d519ebf4a5a1e69405152c45f4ce65da73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:36.591460  502638 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.key.03b1017e
	I0926 23:19:36.591483  502638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.crt.03b1017e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0926 23:19:36.759448  502638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.crt.03b1017e ...
	I0926 23:19:36.759480  502638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.crt.03b1017e: {Name:mk5c5f9201c99994ab452c309c856965b872422f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:36.759677  502638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.key.03b1017e ...
	I0926 23:19:36.759701  502638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.key.03b1017e: {Name:mk4a820febaf311fe203587245423f432d6ebef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:36.759825  502638 certs.go:382] copying /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.crt.03b1017e -> /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.crt
	I0926 23:19:36.759929  502638 certs.go:386] copying /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.key.03b1017e -> /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.key
	I0926 23:19:36.760018  502638 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/proxy-client.key
	I0926 23:19:36.760045  502638 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/proxy-client.crt with IP's: []
	I0926 23:19:36.962940  502638 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/proxy-client.crt ...
	I0926 23:19:36.962980  502638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/proxy-client.crt: {Name:mkc0574373b71d07354babe1b74924cbced1e228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:36.963205  502638 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/proxy-client.key ...
	I0926 23:19:36.963229  502638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/proxy-client.key: {Name:mk7833ee384e01f0c1f67d431b3da3ec119cc15a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:36.963472  502638 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/212137.pem (1338 bytes)
	W0926 23:19:36.963520  502638 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-208519/.minikube/certs/212137_empty.pem, impossibly tiny 0 bytes
	I0926 23:19:36.963536  502638 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 23:19:36.963565  502638 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem (1078 bytes)
	I0926 23:19:36.963597  502638 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:19:36.963630  502638 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem (1675 bytes)
	I0926 23:19:36.963690  502638 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem (1708 bytes)
	I0926 23:19:36.964329  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:19:36.993793  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 23:19:37.020584  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:19:37.046261  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 23:19:37.073794  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 23:19:37.100668  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 23:19:37.128541  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:19:37.154327  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/calico-227717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 23:19:37.180900  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem --> /usr/share/ca-certificates/2121372.pem (1708 bytes)
	I0926 23:19:37.211664  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:19:37.239478  502638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/certs/212137.pem --> /usr/share/ca-certificates/212137.pem (1338 bytes)
	I0926 23:19:37.266857  502638 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:19:37.286861  502638 ssh_runner.go:195] Run: openssl version
	I0926 23:19:37.292508  502638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121372.pem && ln -fs /usr/share/ca-certificates/2121372.pem /etc/ssl/certs/2121372.pem"
	I0926 23:19:37.303476  502638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121372.pem
	I0926 23:19:37.307440  502638 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:36 /usr/share/ca-certificates/2121372.pem
	I0926 23:19:37.307492  502638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121372.pem
	I0926 23:19:37.314885  502638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2121372.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:19:37.324953  502638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:19:37.335080  502638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:19:37.338908  502638 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:19:37.338973  502638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:19:37.345959  502638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:19:37.356301  502638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212137.pem && ln -fs /usr/share/ca-certificates/212137.pem /etc/ssl/certs/212137.pem"
	I0926 23:19:37.367392  502638 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212137.pem
	I0926 23:19:37.371340  502638 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:36 /usr/share/ca-certificates/212137.pem
	I0926 23:19:37.371405  502638 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212137.pem
	I0926 23:19:37.379251  502638 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/212137.pem /etc/ssl/certs/51391683.0"
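
The openssl x509 -hash / ln -fs pairs above are c_rehash-style trust wiring: each PEM gets a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0 for minikubeCA.pem), which is the name OpenSSL-based clients look up. A hedged Go sketch of one such link; trustCert is a hypothetical helper and needs root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert computes the OpenSSL subject hash of a PEM and symlinks it
    // into /etc/ssl/certs under <hash>.0, like the shell commands above.
    func trustCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
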
	I0926 23:19:37.389563  502638 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:19:37.393339  502638 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 23:19:37.393403  502638 kubeadm.go:400] StartCluster: {Name:calico-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:19:37.393497  502638 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 23:19:37.393551  502638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:19:37.432519  502638 cri.go:89] found id: ""
	I0926 23:19:37.432582  502638 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:19:37.442654  502638 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:19:37.452342  502638 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 23:19:37.452407  502638 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:19:37.462284  502638 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:19:37.462302  502638 kubeadm.go:157] found existing configuration files:
	
	I0926 23:19:37.462354  502638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:19:37.471874  502638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:19:37.471930  502638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:19:37.482503  502638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:19:37.492540  502638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:19:37.492609  502638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:19:37.501617  502638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:19:37.511052  502638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:19:37.511117  502638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:19:37.520294  502638 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:19:37.529468  502638 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:19:37.529520  502638 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
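The four grep-then-rm cycles above follow one rule: an existing kubeconfig is kept only if it already references the expected control-plane endpoint; otherwise it is deleted so the subsequent kubeadm init can regenerate it. Below is a minimal local sketch of that pattern, assuming direct file access instead of minikube's ssh_runner; cleanupStaleKubeconfigs is an illustrative name, not minikube's API.

	package main
	
	import (
		"os"
		"strings"
	)
	
	// cleanupStaleKubeconfigs keeps each file only if it already points at the
	// expected control-plane endpoint; anything missing, unreadable, or pointing
	// elsewhere is treated as stale and removed so `kubeadm init` rewrites it.
	func cleanupStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f) // stale config: drop it and let kubeadm regenerate
			}
		}
	}
	
	func main() {
		cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}

In this run all four greps exit with status 2 (the files do not exist yet), so the removals are no-ops and bootstrap proceeds to a clean init.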
	I0926 23:19:37.538417  502638 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 23:19:37.577678  502638 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 23:19:37.577793  502638 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 23:19:37.594553  502638 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 23:19:37.594628  502638 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 23:19:37.594667  502638 kubeadm.go:318] OS: Linux
	I0926 23:19:37.594736  502638 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 23:19:37.594784  502638 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 23:19:37.594827  502638 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 23:19:37.594869  502638 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 23:19:37.594915  502638 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 23:19:37.594990  502638 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 23:19:37.595067  502638 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 23:19:37.595129  502638 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 23:19:37.652033  502638 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 23:19:37.652178  502638 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 23:19:37.652329  502638 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 23:19:37.658772  502638 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 23:19:37.660476  502638 out.go:252]   - Generating certificates and keys ...
	I0926 23:19:37.660591  502638 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 23:19:37.660663  502638 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 23:19:38.056635  502638 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 23:19:38.675751  502638 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 23:19:38.971861  502638 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 23:19:39.325172  502638 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 23:19:39.422316  502638 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 23:19:39.422555  502638 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-227717 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0926 23:19:39.478151  502638 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 23:19:39.478347  502638 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-227717 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0926 23:19:39.739462  502638 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 23:19:39.878466  502638 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 23:19:40.017854  502638 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 23:19:40.018029  502638 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 23:19:40.220389  502638 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 23:19:40.412371  502638 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 23:19:40.978164  502638 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 23:19:41.049073  502638 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 23:19:41.139701  502638 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 23:19:41.140292  502638 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 23:19:41.144448  502638 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 23:19:41.145935  502638 out.go:252]   - Booting up control plane ...
	I0926 23:19:41.146063  502638 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 23:19:41.146197  502638 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 23:19:41.147220  502638 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 23:19:41.157008  502638 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 23:19:41.157252  502638 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 23:19:41.164019  502638 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 23:19:41.164389  502638 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 23:19:41.164437  502638 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 23:19:41.242920  502638 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 23:19:41.243132  502638 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 23:19:42.244409  502638 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.0015951s
	I0926 23:19:42.247601  502638 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 23:19:42.247764  502638 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0926 23:19:42.248035  502638 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 23:19:42.248214  502638 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 23:19:43.850873  502638 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.603124491s
	I0926 23:19:44.530600  502638 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.282910725s
	I0926 23:19:46.249839  502638 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002104576s
	I0926 23:19:46.264273  502638 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 23:19:46.273179  502638 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 23:19:46.285573  502638 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 23:19:46.285879  502638 kubeadm.go:318] [mark-control-plane] Marking the node calico-227717 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 23:19:46.294945  502638 kubeadm.go:318] [bootstrap-token] Using token: 86jge8.kird2qm4952h8ujw
	I0926 23:19:46.296179  502638 out.go:252]   - Configuring RBAC rules ...
	I0926 23:19:46.296369  502638 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 23:19:46.301423  502638 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 23:19:46.307536  502638 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 23:19:46.310289  502638 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 23:19:46.312972  502638 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 23:19:46.316514  502638 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 23:19:46.655474  502638 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 23:19:47.072061  502638 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 23:19:47.657746  502638 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 23:19:47.658700  502638 kubeadm.go:318] 
	I0926 23:19:47.658817  502638 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 23:19:47.658833  502638 kubeadm.go:318] 
	I0926 23:19:47.658903  502638 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 23:19:47.658910  502638 kubeadm.go:318] 
	I0926 23:19:47.658935  502638 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 23:19:47.659027  502638 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 23:19:47.659151  502638 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 23:19:47.659166  502638 kubeadm.go:318] 
	I0926 23:19:47.659241  502638 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 23:19:47.659250  502638 kubeadm.go:318] 
	I0926 23:19:47.659312  502638 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 23:19:47.659323  502638 kubeadm.go:318] 
	I0926 23:19:47.659385  502638 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 23:19:47.659480  502638 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 23:19:47.659573  502638 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 23:19:47.659585  502638 kubeadm.go:318] 
	I0926 23:19:47.659709  502638 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 23:19:47.659840  502638 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 23:19:47.659851  502638 kubeadm.go:318] 
	I0926 23:19:47.659976  502638 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 86jge8.kird2qm4952h8ujw \
	I0926 23:19:47.660110  502638 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3d6fdc47b9a05c23d4386afe181df704d515c8f0f79bc04c09ac3ae58668e55b \
	I0926 23:19:47.660135  502638 kubeadm.go:318] 	--control-plane 
	I0926 23:19:47.660139  502638 kubeadm.go:318] 
	I0926 23:19:47.660214  502638 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 23:19:47.660221  502638 kubeadm.go:318] 
	I0926 23:19:47.660293  502638 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 86jge8.kird2qm4952h8ujw \
	I0926 23:19:47.660401  502638 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3d6fdc47b9a05c23d4386afe181df704d515c8f0f79bc04c09ac3ae58668e55b 
	I0926 23:19:47.663474  502638 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 23:19:47.663601  502638 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
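For context, the whole bootstrap above is a single remote `kubeadm init` call with a pre-rendered config file and an explicit list of preflight checks to skip (checks such as Swap, NumCPU, and SystemVerification are expected to fail inside a docker-driver node). A hedged sketch of the same invocation pattern, run locally with os/exec rather than over SSH; runKubeadmInit is an illustrative helper, not minikube's:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// runKubeadmInit shells out to kubeadm the way the log above does: a fixed
	// config file plus an ignore-list for preflight checks that cannot pass
	// inside a container-based node.
	func runKubeadmInit(kubeadm, config string, ignored []string) (string, error) {
		args := []string{"init", "--config", config}
		if len(ignored) > 0 {
			args = append(args, "--ignore-preflight-errors="+strings.Join(ignored, ","))
		}
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		out, err := runKubeadmInit(
			"/var/lib/minikube/binaries/v1.34.0/kubeadm",
			"/var/tmp/minikube/kubeadm.yaml",
			[]string{"SystemVerification", "Swap", "NumCPU", "Mem"},
		)
		fmt.Print(out)
		if err != nil {
			fmt.Println("kubeadm init failed:", err)
		}
	}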
	I0926 23:19:47.663643  502638 cni.go:84] Creating CNI manager for "calico"
	I0926 23:19:47.665438  502638 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0926 23:19:47.667845  502638 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0926 23:19:47.667873  502638 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0926 23:19:47.689717  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0926 23:19:48.516464  502638 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:19:48.516577  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-227717 minikube.k8s.io/updated_at=2025_09_26T23_19_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=calico-227717 minikube.k8s.io/primary=true
	I0926 23:19:48.516594  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:48.525421  502638 ops.go:34] apiserver oom_adj: -16
	I0926 23:19:48.595754  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:49.095885  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:49.596014  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:50.096300  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:50.596232  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:51.096266  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:51.596309  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:52.096209  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:52.596185  502638 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:19:52.687922  502638 kubeadm.go:1113] duration metric: took 4.171437734s to wait for elevateKubeSystemPrivileges
	I0926 23:19:52.688005  502638 kubeadm.go:402] duration metric: took 15.294607493s to StartCluster
	I0926 23:19:52.688037  502638 settings.go:142] acquiring lock: {Name:mk916931486ea7be0f55a69a0dcc9388c8f91bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:52.688128  502638 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 23:19:52.690188  502638 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/kubeconfig: {Name:mk573e8783a83da2d326620e120d75cc729311d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:19:52.690459  502638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 23:19:52.690466  502638 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:19:52.690561  502638 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:19:52.690654  502638 addons.go:69] Setting storage-provisioner=true in profile "calico-227717"
	I0926 23:19:52.690682  502638 addons.go:238] Setting addon storage-provisioner=true in "calico-227717"
	I0926 23:19:52.690694  502638 addons.go:69] Setting default-storageclass=true in profile "calico-227717"
	I0926 23:19:52.690710  502638 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-227717"
	I0926 23:19:52.690713  502638 host.go:66] Checking if "calico-227717" exists ...
	I0926 23:19:52.690820  502638 config.go:182] Loaded profile config "calico-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:19:52.691288  502638 cli_runner.go:164] Run: docker container inspect calico-227717 --format={{.State.Status}}
	I0926 23:19:52.691340  502638 cli_runner.go:164] Run: docker container inspect calico-227717 --format={{.State.Status}}
	I0926 23:19:52.692415  502638 out.go:179] * Verifying Kubernetes components...
	I0926 23:19:52.696747  502638 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:19:52.717533  502638 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 23:19:52.718843  502638 addons.go:238] Setting addon default-storageclass=true in "calico-227717"
	I0926 23:19:52.718893  502638 host.go:66] Checking if "calico-227717" exists ...
	I0926 23:19:52.718937  502638 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:19:52.718955  502638 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:19:52.719044  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:52.719603  502638 cli_runner.go:164] Run: docker container inspect calico-227717 --format={{.State.Status}}
	I0926 23:19:52.753152  502638 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:19:52.753235  502638 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:19:52.753320  502638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-227717
	I0926 23:19:52.754251  502638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/calico-227717/id_rsa Username:docker}
	I0926 23:19:52.783555  502638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/calico-227717/id_rsa Username:docker}
	I0926 23:19:52.810900  502638 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 23:19:52.848971  502638 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:19:52.884862  502638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:19:52.915529  502638 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:19:53.061431  502638 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0926 23:19:53.061766  502638 node_ready.go:35] waiting up to 15m0s for node "calico-227717" to be "Ready" ...
	I0926 23:19:53.298227  502638 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0926 23:19:53.299328  502638 addons.go:514] duration metric: took 608.769228ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0926 23:19:53.565361  502638 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-227717" context rescaled to 1 replicas
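Everything that follows is a single poll loop: the test waits up to 15m for the node's Ready condition, which never flips to True because the Calico pods never come up. A minimal client-go sketch of this kind of readiness poll, assuming a reachable kubeconfig at the default path; pollNodeReady is an illustrative name, and the ~2.5s interval is inferred from the log timestamps, not taken from minikube's source:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// pollNodeReady blocks until the named node reports Ready=True or the
	// timeout expires, re-checking roughly every 2.5s like the log below.
	func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2500 * time.Millisecond)
		}
		return fmt.Errorf("node %q never became Ready within %s", name, timeout)
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := pollNodeReady(context.Background(), cs, "calico-227717", 15*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}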
	W0926 23:19:55.065274  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:19:57.565007  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	[... the same node_ready.go:57 warning repeats roughly every 2.5s from 23:19:55 through 23:33:08 ...]
	W0926 23:33:06.565023  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:08.565067  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:10.565457  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:12.565584  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:15.065783  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:17.564645  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:20.064724  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:22.065420  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:24.564471  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:26.565394  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:29.065254  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:31.564929  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:33.565426  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:36.065062  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:38.564876  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:41.065201  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:43.065410  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:45.565037  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:47.565081  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:49.565495  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:52.065079  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:54.565393  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:57.064581  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:33:59.065447  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:01.565229  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:04.065224  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:06.565641  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:09.064518  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:11.064735  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:13.567010  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:16.064523  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:18.564688  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:20.565527  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:23.065081  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:25.564655  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:27.565279  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:30.064633  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:32.064819  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:34.564869  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:36.565269  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:39.065288  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:41.564991  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:44.065013  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:46.065220  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:48.565409  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:34:51.064703  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:34:53.062819  502638 node_ready.go:38] duration metric: took 15m0.001007117s for node "calico-227717" to be "Ready" ...
	I0926 23:34:53.064890  502638 out.go:203] 
	W0926 23:34:53.066104  502638 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0926 23:34:53.066123  502638 out.go:285] * 
	W0926 23:34:53.067854  502638 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 23:34:53.068830  502638 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (925.68s)
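The calico failure above is a readiness-wait timeout: the harness polled the node's Ready condition on a roughly 2.5s cadence for the full 15m0s budget and the condition never flipped to True, so start exited with GUEST_START (exit status 80). Below is a minimal client-go sketch of that kind of wait loop; the node name, timeout, and retry interval are taken from the log, while the waitNodeReady helper itself is illustrative, not minikube's actual node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// deadline passes, mirroring the retry cadence visible in the log above.
// (Illustrative helper; not minikube's code.)
func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2500 * time.Millisecond) // the log retries roughly every 2.5s
	}
	return fmt.Errorf("node %q never reported Ready within %s", name, timeout)
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := waitNodeReady(client, "calico-227717", 15*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Against this cluster the sketch would hit the same deadline, since the kubelet never published a Ready=True condition before the context expired.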
E0926 23:36:07.769555  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/custom-flannel-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:36:39.108875  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/enable-default-cni-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:37:05.775557  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/old-k8s-version-413278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:37:12.142943  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:37:19.817394  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:37:22.131283  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/flannel-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:37:41.655071  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:38:28.839649  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/old-k8s-version-413278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:38:42.883075  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:38:43.694426  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hjt5g" [f3e14755-2887-45b0-be6b-6ce721ec83dc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-26 23:30:00.895898167 +0000 UTC m=+3638.805834429
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 describe po kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-441435 describe po kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-hjt5g
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-441435/192.168.94.2
Start Time:       Fri, 26 Sep 2025 23:20:26 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2vq4r (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-2vq4r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  9m34s                 default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjt5g to default-k8s-diff-port-441435
  Warning  Failed     7m10s (x2 over 9m4s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    4m7s (x5 over 9m34s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     3m7s (x5 over 9m4s)   kubelet            Error: ErrImagePull
  Warning  Failed     3m7s (x3 over 8m4s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m4s (x16 over 9m3s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    61s (x21 over 9m3s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 logs kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-441435 logs kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard: exit status 1 (73.628808ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-hjt5g" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-441435 logs kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
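The dashboard pod above was scheduled successfully; what failed was every image pull, because docker.io rejected the unauthenticated pulls with toomanyrequests until the 9m0s wait expired. A small client-go sketch of how a wait can surface the kubelet's pull error instead of timing out silently follows; the namespace and pod name come from the log, and the reportPullFailures helper is illustrative, not the test harness's code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// reportPullFailures prints the Waiting reason/message for each container,
// which is where the kubelet records ErrImagePull and ImagePullBackOff.
// (Illustrative helper; not the harness's code.)
func reportPullFailures(client kubernetes.Interface, namespace, name string) error {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			fmt.Printf("%s: %s: %s\n", cs.Name, w.Reason, w.Message)
		}
	}
	return nil
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := reportPullFailures(client, "kubernetes-dashboard",
		"kubernetes-dashboard-855c9754f9-hjt5g"); err != nil {
		fmt.Println(err)
	}
}

Run against this pod, it would print the ImagePullBackOff reason together with the rate-limit message seen in the events table above.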
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-441435
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-441435:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147",
	        "Created": "2025-09-26T23:18:36.870642262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 509025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T23:20:10.302958143Z",
	            "FinishedAt": "2025-09-26T23:20:09.423908983Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147/hostname",
	        "HostsPath": "/var/lib/docker/containers/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147/hosts",
	        "LogPath": "/var/lib/docker/containers/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147-json.log",
	        "Name": "/default-k8s-diff-port-441435",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-441435:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-441435",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147",
	                "LowerDir": "/var/lib/docker/overlay2/f25f005a60672920cf81f4a02033f3002103e9e07337b8cfc62597d68a468832-init/diff:/var/lib/docker/overlay2/539bc53ffef0f27e9ac4c376a14359e91e4b4c4b56d5675ca6caeaaab94e33fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f25f005a60672920cf81f4a02033f3002103e9e07337b8cfc62597d68a468832/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f25f005a60672920cf81f4a02033f3002103e9e07337b8cfc62597d68a468832/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f25f005a60672920cf81f4a02033f3002103e9e07337b8cfc62597d68a468832/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-441435",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-441435/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-441435",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-441435",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-441435",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe1778a957bdd3a1569d5b718442bd0be6d75eacaea1973c3697b05f5c62194f",
	            "SandboxKey": "/var/run/docker/netns/fe1778a957bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-441435": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:29:40:e4:a2:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f3a1c78f885aa3cc6f148623dfbda6420751a6ff0ff6f75cff0c1de9224dfed",
	                    "EndpointID": "bedf54f72fd29de78316353893340b8a723c4cfeb59df455f8358005708f3b95",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-441435",
	                        "faff2fd42df2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-441435 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-441435 logs -n 25: (1.248160325s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-227717 sudo iptables -t nat -L -n -v                                 │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:22 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo systemctl status kubelet --all --full --no-pager         │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo systemctl cat kubelet --no-pager                         │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo systemctl status docker --all --full --no-pager          │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-227717 sudo systemctl cat docker --no-pager                          │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo cat /etc/docker/daemon.json                              │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-227717 sudo docker system info                                       │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-227717 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-227717 sudo systemctl cat cri-docker --no-pager                      │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-227717 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo cri-dockerd --version                                    │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo systemctl status containerd --all --full --no-pager      │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │                     │
	│ ssh     │ -p bridge-227717 sudo systemctl cat containerd --no-pager                      │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo cat /lib/systemd/system/containerd.service               │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo cat /etc/containerd/config.toml                          │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo containerd config dump                                   │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo systemctl status crio --all --full --no-pager            │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo systemctl cat crio --no-pager                            │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ ssh     │ -p bridge-227717 sudo crio config                                              │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	│ delete  │ -p bridge-227717                                                               │ bridge-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:23 UTC │ 26 Sep 25 23:23 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 23:22:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 23:22:08.279028  539238 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:22:08.279349  539238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:22:08.279360  539238 out.go:374] Setting ErrFile to fd 2...
	I0926 23:22:08.279364  539238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:22:08.279528  539238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 23:22:08.280079  539238 out.go:368] Setting JSON to false
	I0926 23:22:08.281299  539238 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11077,"bootTime":1758917851,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:22:08.281388  539238 start.go:140] virtualization: kvm guest
	I0926 23:22:08.283540  539238 out.go:179] * [bridge-227717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:22:08.285043  539238 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:22:08.285059  539238 notify.go:220] Checking for updates...
	I0926 23:22:08.287982  539238 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:22:08.289436  539238 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 23:22:08.290681  539238 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 23:22:08.292054  539238 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:22:08.293273  539238 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:22:08.295050  539238 config.go:182] Loaded profile config "calico-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:08.295193  539238 config.go:182] Loaded profile config "default-k8s-diff-port-441435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:08.295291  539238 config.go:182] Loaded profile config "flannel-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:08.295460  539238 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:22:08.319504  539238 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 23:22:08.319654  539238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:22:08.375935  539238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-26 23:22:08.36470223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:22:08.376119  539238 docker.go:318] overlay module found
	I0926 23:22:08.378140  539238 out.go:179] * Using the docker driver based on user configuration
	I0926 23:22:08.379639  539238 start.go:304] selected driver: docker
	I0926 23:22:08.379662  539238 start.go:924] validating driver "docker" against <nil>
	I0926 23:22:08.379677  539238 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:22:08.380420  539238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:22:08.437454  539238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-26 23:22:08.427736807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:22:08.437614  539238 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 23:22:08.437845  539238 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:22:08.439688  539238 out.go:179] * Using Docker driver with root privileges
	I0926 23:22:08.441008  539238 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:22:08.441030  539238 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 23:22:08.441130  539238 start.go:348] cluster config:
	{Name:bridge-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:22:08.442534  539238 out.go:179] * Starting "bridge-227717" primary control-plane node in "bridge-227717" cluster
	I0926 23:22:08.443844  539238 cache.go:123] Beginning downloading kic base image for docker with crio
	I0926 23:22:08.445170  539238 out.go:179] * Pulling base image v0.0.48 ...
	I0926 23:22:08.446359  539238 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:22:08.446397  539238 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 23:22:08.446404  539238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:22:08.446414  539238 cache.go:58] Caching tarball of preloaded images
	I0926 23:22:08.446520  539238 preload.go:172] Found /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:22:08.446534  539238 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 23:22:08.446643  539238 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/config.json ...
	I0926 23:22:08.446667  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/config.json: {Name:mk96aa4c4d7cc09ca7898d9a34b38afcf66f305a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:08.467252  539238 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0926 23:22:08.467269  539238 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0926 23:22:08.467284  539238 cache.go:232] Successfully downloaded all kic artifacts
	I0926 23:22:08.467317  539238 start.go:360] acquireMachinesLock for bridge-227717: {Name:mkeb267a799f13412ae5263736c628e51911a08b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:22:08.467417  539238 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "bridge-227717"
	I0926 23:22:08.467450  539238 start.go:93] Provisioning new machine with config: &{Name:bridge-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:22:08.467561  539238 start.go:125] createHost starting for "" (driver="docker")
	I0926 23:22:05.943549  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:05.943587  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:05.943598  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:05.943606  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:05.943617  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:05.943626  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:05.943632  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:05.943638  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:05.943675  531066 retry.go:31] will retry after 838.406774ms: missing components: kube-dns
	I0926 23:22:06.785872  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:06.785916  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:06.785946  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:06.785955  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:06.785963  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:06.785973  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:06.785979  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:06.785985  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:06.786009  531066 retry.go:31] will retry after 896.684906ms: missing components: kube-dns
	I0926 23:22:07.686824  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:07.686860  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:07.686868  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:07.686873  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:07.686881  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:07.686887  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:07.686892  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:07.686897  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:07.686916  531066 retry.go:31] will retry after 1.836710124s: missing components: kube-dns
	I0926 23:22:09.528120  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:09.528157  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:09.528164  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:09.528171  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:09.528175  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:09.528180  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:09.528186  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:09.528191  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:09.528214  531066 retry.go:31] will retry after 1.67750311s: missing components: kube-dns
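
The repeated retry.go:31 entries above are a poll-and-backoff loop: list the kube-system pods, report any required component (here kube-dns, served by the coredns pod) that is not yet Ready, and sleep a jittered interval before polling again. A minimal Go sketch of that pattern, with a stubbed check standing in for the real pod lister and an assumed backoff range, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents polls until missing() returns nil or the timeout passes.
// The jittered delay mirrors the varying "will retry after ..." intervals
// above; the exact backoff policy here is an assumption.
func waitForComponents(missing func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := missing()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		delay := time.Duration(500+rand.Intn(2000)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	// Stub check: pretend kube-dns becomes Ready on the fourth poll.
	tries := 0
	err := waitForComponents(func() error {
		if tries++; tries < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, time.Minute)
	fmt.Println("wait finished:", err)
}
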
	W0926 23:22:09.065330  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:11.130757  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:08.469282  539238 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0926 23:22:08.469472  539238 start.go:159] libmachine.API.Create for "bridge-227717" (driver="docker")
	I0926 23:22:08.469500  539238 client.go:168] LocalClient.Create starting
	I0926 23:22:08.469578  539238 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem
	I0926 23:22:08.469605  539238 main.go:141] libmachine: Decoding PEM data...
	I0926 23:22:08.469618  539238 main.go:141] libmachine: Parsing certificate...
	I0926 23:22:08.469708  539238 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem
	I0926 23:22:08.469734  539238 main.go:141] libmachine: Decoding PEM data...
	I0926 23:22:08.469748  539238 main.go:141] libmachine: Parsing certificate...
	I0926 23:22:08.470124  539238 cli_runner.go:164] Run: docker network inspect bridge-227717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 23:22:08.486145  539238 cli_runner.go:211] docker network inspect bridge-227717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 23:22:08.486219  539238 network_create.go:284] running [docker network inspect bridge-227717] to gather additional debugging logs...
	I0926 23:22:08.486245  539238 cli_runner.go:164] Run: docker network inspect bridge-227717
	W0926 23:22:08.501901  539238 cli_runner.go:211] docker network inspect bridge-227717 returned with exit code 1
	I0926 23:22:08.501932  539238 network_create.go:287] error running [docker network inspect bridge-227717]: docker network inspect bridge-227717: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-227717 not found
	I0926 23:22:08.501949  539238 network_create.go:289] output of [docker network inspect bridge-227717]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-227717 not found
	
	** /stderr **
	I0926 23:22:08.502033  539238 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 23:22:08.518845  539238 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-61b47db54300 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:5a:0f:e5:da:60} reservation:<nil>}
	I0926 23:22:08.519525  539238 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d81bcc6cb1d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:9e:9a:18:c3:8e} reservation:<nil>}
	I0926 23:22:08.520447  539238 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6dea4b9b493 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:a1:51:b0:46:1c} reservation:<nil>}
	I0926 23:22:08.521576  539238 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cc95b0}
	I0926 23:22:08.521607  539238 network_create.go:124] attempt to create docker network bridge-227717 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0926 23:22:08.521659  539238 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-227717 bridge-227717
	I0926 23:22:08.580593  539238 network_create.go:108] docker network bridge-227717 192.168.76.0/24 created
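
The network.go lines above walk candidate private /24s in steps of 9 on the third octet (49, 58, 67, 76, ...) and take the first one no existing bridge occupies, then shell out to docker network create with the flags logged above. A rough sketch of that flow; the taken set is hard-coded here where minikube actually inspects host interfaces:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Subnets already claimed by other profiles (per the log: .49, .58, .67).
	taken := map[int]bool{49: true, 58: true, 67: true}

	// Scan third octets 49, 58, 67, 76, ... and keep the first free one.
	octet := 49
	for taken[octet] {
		octet += 9
	}
	subnet := fmt.Sprintf("192.168.%d.0/24", octet)
	gateway := fmt.Sprintf("192.168.%d.1", octet)

	// Same invocation shape as the logged "docker network create" run.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=bridge-227717",
		"bridge-227717").CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}
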
	I0926 23:22:08.580622  539238 kic.go:121] calculated static IP "192.168.76.2" for the "bridge-227717" container
	I0926 23:22:08.580696  539238 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 23:22:08.598609  539238 cli_runner.go:164] Run: docker volume create bridge-227717 --label name.minikube.sigs.k8s.io=bridge-227717 --label created_by.minikube.sigs.k8s.io=true
	I0926 23:22:08.618045  539238 oci.go:103] Successfully created a docker volume bridge-227717
	I0926 23:22:08.618135  539238 cli_runner.go:164] Run: docker run --rm --name bridge-227717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-227717 --entrypoint /usr/bin/test -v bridge-227717:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 23:22:08.988380  539238 oci.go:107] Successfully prepared a docker volume bridge-227717
	I0926 23:22:08.988423  539238 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:22:08.988444  539238 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 23:22:08.988505  539238 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v bridge-227717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 23:22:13.240265  539238 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v bridge-227717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.251709332s)
	I0926 23:22:13.240299  539238 kic.go:203] duration metric: took 4.251851632s to extract preloaded images to volume ...
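
The preload step above populates the machine volume before the node container exists: a throwaway container bind-mounts the host tarball read-only, mounts the named volume at /extractDir, and untars straight into it with lz4 decompression. An equivalent sketch; the tarball path is abbreviated and illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const image = "gcr.io/k8s-minikube/kicbase:v0.0.48" // digest pin omitted for brevity
	tarball := "/path/to/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4"

	start := time.Now()
	// Mirrors the logged docker run: tar as the entrypoint, extracting the
	// read-only preload tarball into the bridge-227717 volume.
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "bridge-227717:/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("took %s to extract preloaded images to volume\n", time.Since(start))
}
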
	W0926 23:22:13.240391  539238 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 23:22:13.240425  539238 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 23:22:13.240500  539238 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 23:22:11.209647  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:11.209695  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:11.209704  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:11.209715  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:11.209722  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:11.209732  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:11.209738  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:11.209746  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:11.209765  531066 retry.go:31] will retry after 2.403673484s: missing components: kube-dns
	I0926 23:22:13.620151  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:13.620193  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:13.620202  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:13.620211  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:13.620217  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:13.620226  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:13.620232  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:13.620237  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:13.620256  531066 retry.go:31] will retry after 2.413412869s: missing components: kube-dns
	I0926 23:22:13.294455  539238 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-227717 --name bridge-227717 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-227717 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-227717 --network bridge-227717 --ip 192.168.76.2 --volume bridge-227717:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0926 23:22:13.567341  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Running}}
	I0926 23:22:13.585500  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:13.604156  539238 cli_runner.go:164] Run: docker exec bridge-227717 stat /var/lib/dpkg/alternatives/iptables
	I0926 23:22:13.650955  539238 oci.go:144] the created container "bridge-227717" has a running status.
	I0926 23:22:13.650986  539238 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa...
	I0926 23:22:13.741225  539238 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 23:22:13.768466  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:13.789477  539238 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 23:22:13.789506  539238 kic_runner.go:114] Args: [docker exec --privileged bridge-227717 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0926 23:22:13.845920  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:13.867558  539238 machine.go:93] provisionDockerMachine start ...
	I0926 23:22:13.867669  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:13.889876  539238 main.go:141] libmachine: Using SSH client type: native
	I0926 23:22:13.890267  539238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0926 23:22:13.890291  539238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 23:22:14.033514  539238 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-227717
	
	I0926 23:22:14.033546  539238 ubuntu.go:182] provisioning hostname "bridge-227717"
	I0926 23:22:14.033615  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:14.054267  539238 main.go:141] libmachine: Using SSH client type: native
	I0926 23:22:14.054527  539238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0926 23:22:14.054544  539238 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-227717 && echo "bridge-227717" | sudo tee /etc/hostname
	I0926 23:22:14.206907  539238 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-227717
	
	I0926 23:22:14.207004  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:14.227235  539238 main.go:141] libmachine: Using SSH client type: native
	I0926 23:22:14.227550  539238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0926 23:22:14.227580  539238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-227717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-227717/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-227717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 23:22:14.365067  539238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
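
Each "Using SSH client type: native" / "About to run SSH command" pair above is a Go SSH session against the container's published port 22 (mapped here to 127.0.0.1:33133), authenticated with the generated machine key. A minimal sketch using golang.org/x/crypto/ssh, with the key path and port taken from the log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-only test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same first command the provisioner runs: confirm the hostname.
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("SSH cmd err, output: %v: %s", err, out)
}
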
	I0926 23:22:14.365109  539238 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-208519/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-208519/.minikube}
	I0926 23:22:14.365166  539238 ubuntu.go:190] setting up certificates
	I0926 23:22:14.365184  539238 provision.go:84] configureAuth start
	I0926 23:22:14.365237  539238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-227717
	I0926 23:22:14.383845  539238 provision.go:143] copyHostCerts
	I0926 23:22:14.383915  539238 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem, removing ...
	I0926 23:22:14.383931  539238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem
	I0926 23:22:14.384004  539238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem (1675 bytes)
	I0926 23:22:14.384156  539238 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem, removing ...
	I0926 23:22:14.384171  539238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem
	I0926 23:22:14.384215  539238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem (1078 bytes)
	I0926 23:22:14.384328  539238 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem, removing ...
	I0926 23:22:14.384341  539238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem
	I0926 23:22:14.384382  539238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem (1123 bytes)
	I0926 23:22:14.384477  539238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem org=jenkins.bridge-227717 san=[127.0.0.1 192.168.76.2 bridge-227717 localhost minikube]
	I0926 23:22:14.555752  539238 provision.go:177] copyRemoteCerts
	I0926 23:22:14.555816  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:22:14.555853  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:14.574627  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:14.673152  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 23:22:14.701442  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 23:22:14.726795  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 23:22:14.752197  539238 provision.go:87] duration metric: took 386.996004ms to configureAuth
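
configureAuth regenerates the machine's server certificate from the shared minikube CA, embedding the SAN list logged above (127.0.0.1 192.168.76.2 bridge-227717 localhost minikube) before the .pem files are copied into /etc/docker. A compressed crypto/x509 sketch of that issuance; error handling is elided for brevity, and the PKCS#1 RSA key format is an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the shared CA (paths as in the log); assumes a PKCS#1 RSA key.
	caPEM, _ := os.ReadFile("/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem")
	caKeyPEM, _ := os.ReadFile("/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-227717"}},
		// SAN list from the provision.go line above.
		DNSNames:    []string{"bridge-227717", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
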
	I0926 23:22:14.752227  539238 ubuntu.go:206] setting minikube options for container-runtime
	I0926 23:22:14.752419  539238 config.go:182] Loaded profile config "bridge-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:14.752542  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:14.770615  539238 main.go:141] libmachine: Using SSH client type: native
	I0926 23:22:14.770891  539238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0926 23:22:14.770915  539238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 23:22:15.020349  539238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 23:22:15.020380  539238 machine.go:96] duration metric: took 1.152796056s to provisionDockerMachine
	I0926 23:22:15.020396  539238 client.go:171] duration metric: took 6.550890038s to LocalClient.Create
	I0926 23:22:15.020418  539238 start.go:167] duration metric: took 6.550944995s to libmachine.API.Create "bridge-227717"
	I0926 23:22:15.020427  539238 start.go:293] postStartSetup for "bridge-227717" (driver="docker")
	I0926 23:22:15.020442  539238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:22:15.020513  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:22:15.020558  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:15.038720  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:15.139176  539238 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:22:15.142726  539238 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 23:22:15.142764  539238 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 23:22:15.142777  539238 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 23:22:15.142786  539238 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 23:22:15.142798  539238 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-208519/.minikube/addons for local assets ...
	I0926 23:22:15.142856  539238 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-208519/.minikube/files for local assets ...
	I0926 23:22:15.142958  539238 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem -> 2121372.pem in /etc/ssl/certs
	I0926 23:22:15.143056  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:22:15.152523  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem --> /etc/ssl/certs/2121372.pem (1708 bytes)
	I0926 23:22:15.181057  539238 start.go:296] duration metric: took 160.602622ms for postStartSetup
	I0926 23:22:15.181419  539238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-227717
	I0926 23:22:15.200373  539238 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/config.json ...
	I0926 23:22:15.200595  539238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:22:15.200647  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:15.221129  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:15.319393  539238 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 23:22:15.324724  539238 start.go:128] duration metric: took 6.857145072s to createHost
	I0926 23:22:15.324751  539238 start.go:83] releasing machines lock for "bridge-227717", held for 6.857318622s
	I0926 23:22:15.324833  539238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-227717
	I0926 23:22:15.344474  539238 ssh_runner.go:195] Run: cat /version.json
	I0926 23:22:15.344523  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:15.344587  539238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:22:15.344658  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:15.364232  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:15.364724  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:15.456440  539238 ssh_runner.go:195] Run: systemctl --version
	I0926 23:22:15.530549  539238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 23:22:15.674857  539238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 23:22:15.679887  539238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:22:15.703285  539238 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0926 23:22:15.703355  539238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:22:15.732546  539238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0926 23:22:15.732568  539238 start.go:495] detecting cgroup driver to use...
	I0926 23:22:15.732598  539238 detect.go:190] detected "systemd" cgroup driver on host os
	I0926 23:22:15.732641  539238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 23:22:15.748391  539238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 23:22:15.760275  539238 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:22:15.760346  539238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:22:15.774861  539238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:22:15.790472  539238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:22:15.863052  539238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:22:15.936335  539238 docker.go:234] disabling docker service ...
	I0926 23:22:15.936392  539238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:22:15.955556  539238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:22:15.968730  539238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:22:16.035211  539238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:22:16.209853  539238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:22:16.222129  539238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:22:16.240570  539238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 23:22:16.240639  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.253932  539238 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0926 23:22:16.254025  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.265787  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.279948  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.292673  539238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:22:16.303007  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.313783  539238 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.331734  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.342483  539238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:22:16.351439  539238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:22:16.360720  539238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:22:16.428173  539238 ssh_runner.go:195] Run: sudo systemctl restart crio
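
The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then reloads systemd and restarts CRI-O. A sketch of the first two substitutions done natively in Go rather than via sed; same effect on the file, and it must run as root:

package main

import (
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Same effect as the two sed runs above: pin the pause image and
	// force the systemd cgroup manager.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
	// Reload units and restart CRI-O, as in the log.
	_ = exec.Command("systemctl", "daemon-reload").Run()
	_ = exec.Command("systemctl", "restart", "crio").Run()
}
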
	I0926 23:22:16.524622  539238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 23:22:16.524681  539238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 23:22:16.528749  539238 start.go:563] Will wait 60s for crictl version
	I0926 23:22:16.528810  539238 ssh_runner.go:195] Run: which crictl
	I0926 23:22:16.532388  539238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:22:16.568641  539238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0926 23:22:16.568735  539238 ssh_runner.go:195] Run: crio --version
	I0926 23:22:16.606806  539238 ssh_runner.go:195] Run: crio --version
	I0926 23:22:16.644351  539238 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	W0926 23:22:13.565297  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:16.068512  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:16.645429  539238 cli_runner.go:164] Run: docker network inspect bridge-227717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 23:22:16.662944  539238 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0926 23:22:16.667413  539238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:22:16.679295  539238 kubeadm.go:883] updating cluster {Name:bridge-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:22:16.679415  539238 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:22:16.679466  539238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:22:16.752288  539238 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:22:16.752310  539238 crio.go:433] Images already preloaded, skipping extraction
	I0926 23:22:16.752368  539238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:22:16.788381  539238 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:22:16.788407  539238 cache_images.go:85] Images are preloaded, skipping loading
	I0926 23:22:16.788420  539238 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0926 23:22:16.788527  539238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=bridge-227717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0926 23:22:16.788603  539238 ssh_runner.go:195] Run: crio config
	I0926 23:22:16.832420  539238 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:22:16.832452  539238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:22:16.832473  539238 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-227717 NodeName:bridge-227717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:22:16.832611  539238 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-227717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:22:16.832671  539238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:22:16.842608  539238 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:22:16.842676  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:22:16.852376  539238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0926 23:22:16.872244  539238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:22:16.894296  539238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0926 23:22:16.914270  539238 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0926 23:22:16.918747  539238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:22:16.930823  539238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:22:16.995253  539238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:22:17.022712  539238 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717 for IP: 192.168.76.2
	I0926 23:22:17.022736  539238 certs.go:195] generating shared ca certs ...
	I0926 23:22:17.022767  539238 certs.go:227] acquiring lock for ca certs: {Name:mk7fa2bdff33a744d301294affc1d74bea26e4b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.022928  539238 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key
	I0926 23:22:17.022979  539238 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key
	I0926 23:22:17.022992  539238 certs.go:257] generating profile certs ...
	I0926 23:22:17.023065  539238 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.key
	I0926 23:22:17.023094  539238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt with IP's: []
	I0926 23:22:17.257181  539238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt ...
	I0926 23:22:17.257211  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt: {Name:mk12ab5b701ec110fb8601a9bc3d04dbaa831776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.257430  539238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.key ...
	I0926 23:22:17.257446  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.key: {Name:mka3f19ba6abd5a9770583f4d38a136a49d6e03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.257565  539238 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key.f0da0de2
	I0926 23:22:17.257589  539238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt.f0da0de2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0926 23:22:17.460005  539238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt.f0da0de2 ...
	I0926 23:22:17.460034  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt.f0da0de2: {Name:mkadaa82cedee7fb0a867007c7de1d4d52a6f9f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.460246  539238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key.f0da0de2 ...
	I0926 23:22:17.460265  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key.f0da0de2: {Name:mk8641615c464c39cf0cbf1ceef0f2f47c5b6794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.460374  539238 certs.go:382] copying /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt.f0da0de2 -> /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt
	I0926 23:22:17.460478  539238 certs.go:386] copying /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key.f0da0de2 -> /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key
	I0926 23:22:17.460561  539238 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.key
	I0926 23:22:17.460581  539238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.crt with IP's: []
	I0926 23:22:17.579117  539238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.crt ...
	I0926 23:22:17.579146  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.crt: {Name:mk10d8d68dbb6b61b5b15fc73c8649e99c3edba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.579316  539238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.key ...
	I0926 23:22:17.579329  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.key: {Name:mk67bae5a996719861df916e29855b00ad52ef70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.579503  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/212137.pem (1338 bytes)
	W0926 23:22:17.579560  539238 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-208519/.minikube/certs/212137_empty.pem, impossibly tiny 0 bytes
	I0926 23:22:17.579575  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 23:22:17.579601  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem (1078 bytes)
	I0926 23:22:17.579626  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:22:17.579660  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem (1675 bytes)
	I0926 23:22:17.579715  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem (1708 bytes)
	I0926 23:22:17.580345  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:22:17.609625  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 23:22:17.636704  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:22:17.662622  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 23:22:17.689427  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 23:22:17.715876  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 23:22:17.743213  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:22:17.770497  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 23:22:17.796873  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem --> /usr/share/ca-certificates/2121372.pem (1708 bytes)
	I0926 23:22:17.826015  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:22:17.851210  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/certs/212137.pem --> /usr/share/ca-certificates/212137.pem (1338 bytes)
	I0926 23:22:17.875975  539238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:22:17.894617  539238 ssh_runner.go:195] Run: openssl version
	I0926 23:22:17.901080  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121372.pem && ln -fs /usr/share/ca-certificates/2121372.pem /etc/ssl/certs/2121372.pem"
	I0926 23:22:17.911627  539238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121372.pem
	I0926 23:22:17.915581  539238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:36 /usr/share/ca-certificates/2121372.pem
	I0926 23:22:17.915645  539238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121372.pem
	I0926 23:22:17.923209  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2121372.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:22:17.933073  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:22:17.943011  539238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:22:17.947100  539238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:22:17.947161  539238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:22:17.954654  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:22:17.964786  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212137.pem && ln -fs /usr/share/ca-certificates/212137.pem /etc/ssl/certs/212137.pem"
	I0926 23:22:17.976321  539238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212137.pem
	I0926 23:22:17.980796  539238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:36 /usr/share/ca-certificates/212137.pem
	I0926 23:22:17.980870  539238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212137.pem
	I0926 23:22:17.988146  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/212137.pem /etc/ssl/certs/51391683.0"
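
The openssl/ln pairs above register each CA bundle in the system trust store: OpenSSL resolves certificates by subject hash, so every .pem gets a <hash>.0 symlink in /etc/ssl/certs (51391683.0 for 212137.pem here). A sketch of one iteration, letting openssl compute the hash and Go place the link:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/212137.pem"

	// openssl prints the subject hash used as the symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
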
	I0926 23:22:17.998575  539238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:22:18.002077  539238 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 23:22:18.002161  539238 kubeadm.go:400] StartCluster: {Name:bridge-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:22:18.002245  539238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 23:22:18.002309  539238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:22:18.039056  539238 cri.go:89] found id: ""
	I0926 23:22:18.039141  539238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:22:18.048717  539238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:22:18.058247  539238 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 23:22:18.058305  539238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:22:18.068379  539238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:22:18.068400  539238 kubeadm.go:157] found existing configuration files:
	
	I0926 23:22:18.068443  539238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:22:18.077807  539238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:22:18.077878  539238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:22:18.087391  539238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:22:18.096651  539238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:22:18.096708  539238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:22:18.106651  539238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:22:18.116406  539238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:22:18.116495  539238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:22:18.125668  539238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:22:18.134938  539238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:22:18.135004  539238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
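
The sequence above is minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so that kubeadm init can regenerate it. In shell terms the logic reduces to roughly the following sketch (an illustration of what the Go code does, not what minikube literally runs):

# Keep each kubeconfig only if it points at the expected endpoint;
# otherwise remove it so `kubeadm init` regenerates it from scratch.
endpoint="https://control-plane.minikube.internal:8443"
for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "$endpoint" "/etc/kubernetes/$conf" \
        || sudo rm -f "/etc/kubernetes/$conf"
done
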
	I0926 23:22:18.143877  539238 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 23:22:18.200556  539238 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 23:22:18.259571  539238 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 23:22:16.038519  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:16.038550  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:16.038557  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:16.038564  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:16.038568  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:16.038572  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:16.038576  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:16.038579  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:16.038594  531066 retry.go:31] will retry after 4.392682378s: missing components: kube-dns
	I0926 23:22:20.436534  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:20.436571  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Running
	I0926 23:22:20.436580  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:20.436586  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:20.436591  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:20.436596  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:20.436602  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:20.436606  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:20.436618  531066 system_pods.go:126] duration metric: took 17.107747385s to wait for k8s-apps to be running ...
	I0926 23:22:20.436633  531066 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:22:20.436690  531066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:22:20.449823  531066 system_svc.go:56] duration metric: took 13.177203ms WaitForService to wait for kubelet
	I0926 23:22:20.449863  531066 kubeadm.go:586] duration metric: took 20.96652036s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:22:20.449889  531066 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:22:20.452916  531066 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 23:22:20.452942  531066 node_conditions.go:123] node cpu capacity is 8
	I0926 23:22:20.452959  531066 node_conditions.go:105] duration metric: took 3.064689ms to run NodePressure ...
	I0926 23:22:20.452974  531066 start.go:241] waiting for startup goroutines ...
	I0926 23:22:20.452983  531066 start.go:246] waiting for cluster config update ...
	I0926 23:22:20.453000  531066 start.go:255] writing updated cluster config ...
	I0926 23:22:20.453333  531066 ssh_runner.go:195] Run: rm -f paused
	I0926 23:22:20.457376  531066 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:22:20.460804  531066 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-72ld9" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.465281  531066 pod_ready.go:94] pod "coredns-66bc5c9577-72ld9" is "Ready"
	I0926 23:22:20.465301  531066 pod_ready.go:86] duration metric: took 4.478615ms for pod "coredns-66bc5c9577-72ld9" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.467228  531066 pod_ready.go:83] waiting for pod "etcd-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.470846  531066 pod_ready.go:94] pod "etcd-flannel-227717" is "Ready"
	I0926 23:22:20.470863  531066 pod_ready.go:86] duration metric: took 3.614994ms for pod "etcd-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.474997  531066 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.478810  531066 pod_ready.go:94] pod "kube-apiserver-flannel-227717" is "Ready"
	I0926 23:22:20.478834  531066 pod_ready.go:86] duration metric: took 3.815303ms for pod "kube-apiserver-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.480773  531066 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.862171  531066 pod_ready.go:94] pod "kube-controller-manager-flannel-227717" is "Ready"
	I0926 23:22:20.862198  531066 pod_ready.go:86] duration metric: took 381.405612ms for pod "kube-controller-manager-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:21.061277  531066 pod_ready.go:83] waiting for pod "kube-proxy-94chj" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:21.461664  531066 pod_ready.go:94] pod "kube-proxy-94chj" is "Ready"
	I0926 23:22:21.461693  531066 pod_ready.go:86] duration metric: took 400.390129ms for pod "kube-proxy-94chj" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:21.662293  531066 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:22.061295  531066 pod_ready.go:94] pod "kube-scheduler-flannel-227717" is "Ready"
	I0926 23:22:22.061322  531066 pod_ready.go:86] duration metric: took 399.003596ms for pod "kube-scheduler-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:22.061333  531066 pod_ready.go:40] duration metric: took 1.603920934s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:22:22.107179  531066 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:22:22.109596  531066 out.go:179] * Done! kubectl is now configured to use "flannel-227717" cluster and "default" namespace by default
	W0926 23:22:18.565528  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:21.064422  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:27.152347  539238 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 23:22:27.152424  539238 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 23:22:27.152531  539238 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 23:22:27.152592  539238 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 23:22:27.152626  539238 kubeadm.go:318] OS: Linux
	I0926 23:22:27.152666  539238 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 23:22:27.152735  539238 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 23:22:27.152791  539238 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 23:22:27.152838  539238 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 23:22:27.152879  539238 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 23:22:27.152927  539238 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 23:22:27.152968  539238 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 23:22:27.153015  539238 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 23:22:27.153081  539238 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 23:22:27.153218  539238 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 23:22:27.153311  539238 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 23:22:27.153371  539238 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 23:22:27.156159  539238 out.go:252]   - Generating certificates and keys ...
	I0926 23:22:27.156233  539238 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 23:22:27.156297  539238 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 23:22:27.156358  539238 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 23:22:27.156422  539238 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 23:22:27.156507  539238 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 23:22:27.156557  539238 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 23:22:27.156604  539238 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 23:22:27.156733  539238 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [bridge-227717 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0926 23:22:27.156821  539238 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 23:22:27.156943  539238 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [bridge-227717 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0926 23:22:27.157030  539238 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 23:22:27.157134  539238 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 23:22:27.157202  539238 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 23:22:27.157268  539238 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 23:22:27.157315  539238 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 23:22:27.157395  539238 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 23:22:27.157459  539238 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 23:22:27.157535  539238 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 23:22:27.157620  539238 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 23:22:27.157738  539238 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 23:22:27.157847  539238 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 23:22:27.159212  539238 out.go:252]   - Booting up control plane ...
	I0926 23:22:27.159292  539238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 23:22:27.159395  539238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 23:22:27.159502  539238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 23:22:27.159618  539238 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 23:22:27.159713  539238 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 23:22:27.159830  539238 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 23:22:27.159912  539238 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 23:22:27.159971  539238 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 23:22:27.160158  539238 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 23:22:27.160284  539238 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 23:22:27.160381  539238 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000861015s
	I0926 23:22:27.160528  539238 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 23:22:27.160668  539238 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0926 23:22:27.160782  539238 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 23:22:27.160888  539238 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 23:22:27.160976  539238 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 919.813595ms
	I0926 23:22:27.161069  539238 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.894189857s
	I0926 23:22:27.161197  539238 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501208699s
	I0926 23:22:27.161362  539238 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 23:22:27.161530  539238 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 23:22:27.161590  539238 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 23:22:27.161806  539238 kubeadm.go:318] [mark-control-plane] Marking the node bridge-227717 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 23:22:27.161882  539238 kubeadm.go:318] [bootstrap-token] Using token: r5cn1d.nfbdwo5sx0g5pe6j
	I0926 23:22:27.163314  539238 out.go:252]   - Configuring RBAC rules ...
	I0926 23:22:27.163437  539238 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 23:22:27.163543  539238 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 23:22:27.163671  539238 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 23:22:27.163817  539238 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 23:22:27.163915  539238 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 23:22:27.163998  539238 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 23:22:27.164139  539238 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 23:22:27.164197  539238 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 23:22:27.164236  539238 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 23:22:27.164245  539238 kubeadm.go:318] 
	I0926 23:22:27.164309  539238 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 23:22:27.164321  539238 kubeadm.go:318] 
	I0926 23:22:27.164430  539238 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 23:22:27.164442  539238 kubeadm.go:318] 
	I0926 23:22:27.164475  539238 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 23:22:27.164565  539238 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 23:22:27.164637  539238 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 23:22:27.164646  539238 kubeadm.go:318] 
	I0926 23:22:27.164727  539238 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 23:22:27.164737  539238 kubeadm.go:318] 
	I0926 23:22:27.164812  539238 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 23:22:27.164822  539238 kubeadm.go:318] 
	I0926 23:22:27.164897  539238 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 23:22:27.164984  539238 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 23:22:27.165046  539238 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 23:22:27.165053  539238 kubeadm.go:318] 
	I0926 23:22:27.165155  539238 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 23:22:27.165233  539238 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 23:22:27.165240  539238 kubeadm.go:318] 
	I0926 23:22:27.165304  539238 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token r5cn1d.nfbdwo5sx0g5pe6j \
	I0926 23:22:27.165404  539238 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3d6fdc47b9a05c23d4386afe181df704d515c8f0f79bc04c09ac3ae58668e55b \
	I0926 23:22:27.165431  539238 kubeadm.go:318] 	--control-plane 
	I0926 23:22:27.165441  539238 kubeadm.go:318] 
	I0926 23:22:27.165517  539238 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 23:22:27.165523  539238 kubeadm.go:318] 
	I0926 23:22:27.165596  539238 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token r5cn1d.nfbdwo5sx0g5pe6j \
	I0926 23:22:27.165714  539238 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3d6fdc47b9a05c23d4386afe181df704d515c8f0f79bc04c09ac3ae58668e55b 
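
For reference, the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. It can be recomputed on the node with the standard pipeline from the kubeadm documentation (the path below uses minikube's certificate directory as reported earlier in this log; the file name ca.crt is the conventional one and is an assumption here):

# Recompute the discovery hash from the cluster CA certificate.
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
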
	I0926 23:22:27.165729  539238 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:22:27.167835  539238 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W0926 23:22:23.065631  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:25.565212  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:27.169116  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 23:22:27.179575  539238 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
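
The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. For orientation, a bridge-plugin conflist of this kind typically looks like the following sketch (every field value here is an assumption for illustration, not the file minikube actually wrote):

# Hypothetical bridge CNI config of the general shape installed for --cni=bridge.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
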
	I0926 23:22:27.200877  539238 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:22:27.200951  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:27.200981  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-227717 minikube.k8s.io/updated_at=2025_09_26T23_22_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=bridge-227717 minikube.k8s.io/primary=true
	I0926 23:22:27.277181  539238 ops.go:34] apiserver oom_adj: -16
	I0926 23:22:27.277240  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:27.777903  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:28.277680  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:28.778136  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:29.277671  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:29.777905  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:30.278283  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:30.778314  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:31.277691  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:31.777527  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:32.278280  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:32.365300  539238 kubeadm.go:1113] duration metric: took 5.164412944s to wait for elevateKubeSystemPrivileges
	I0926 23:22:32.365343  539238 kubeadm.go:402] duration metric: took 14.36318598s to StartCluster
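
The run of identical `kubectl get sa default` calls above is minikube polling until the default service account exists, the last prerequisite before it declares StartCluster finished. The retries reduce to roughly this loop (an illustrative sketch of the retry behavior, not minikube's code):

K=/var/lib/minikube/binaries/v1.34.0/kubectl
# Poll roughly twice a second until the default ServiceAccount is created.
until sudo "$K" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
done
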
	I0926 23:22:32.365366  539238 settings.go:142] acquiring lock: {Name:mk916931486ea7be0f55a69a0dcc9388c8f91bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:32.365454  539238 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 23:22:32.366919  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/kubeconfig: {Name:mk573e8783a83da2d326620e120d75cc729311d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:32.367231  539238 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:22:32.367244  539238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 23:22:32.367320  539238 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:22:32.367411  539238 addons.go:69] Setting storage-provisioner=true in profile "bridge-227717"
	I0926 23:22:32.367437  539238 addons.go:238] Setting addon storage-provisioner=true in "bridge-227717"
	I0926 23:22:32.367445  539238 config.go:182] Loaded profile config "bridge-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:32.367475  539238 host.go:66] Checking if "bridge-227717" exists ...
	I0926 23:22:32.367462  539238 addons.go:69] Setting default-storageclass=true in profile "bridge-227717"
	I0926 23:22:32.367524  539238 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-227717"
	I0926 23:22:32.367895  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:32.368081  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:32.372588  539238 out.go:179] * Verifying Kubernetes components...
	I0926 23:22:32.374180  539238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:22:32.391326  539238 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0926 23:22:28.065074  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:30.065808  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:32.392164  539238 addons.go:238] Setting addon default-storageclass=true in "bridge-227717"
	I0926 23:22:32.392215  539238 host.go:66] Checking if "bridge-227717" exists ...
	I0926 23:22:32.392651  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:32.392890  539238 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:22:32.392919  539238 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:22:32.392974  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:32.418989  539238 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:22:32.419012  539238 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:22:32.419219  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:32.422169  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:32.449248  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:32.466393  539238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 23:22:32.513826  539238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:22:32.543654  539238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:22:32.568106  539238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:22:32.673867  539238 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
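
Reconstructed from the sed expressions in the command above, the rewritten CoreDNS Corefile gains a `log` directive and a `hosts` stanza that resolves host.minikube.internal to the host gateway address. The result can be inspected as follows, with the expected stanza shown as comments:

sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
# Expected to contain (reconstructed from the sed expressions):
#     log
#     errors
#     ...
#     hosts {
#        192.168.76.1 host.minikube.internal
#        fallthrough
#     }
#     forward . /etc/resolv.conf ...
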
	I0926 23:22:32.675402  539238 node_ready.go:35] waiting up to 15m0s for node "bridge-227717" to be "Ready" ...
	I0926 23:22:32.688235  539238 node_ready.go:49] node "bridge-227717" is "Ready"
	I0926 23:22:32.688273  539238 node_ready.go:38] duration metric: took 12.8394ms for node "bridge-227717" to be "Ready" ...
	I0926 23:22:32.688293  539238 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:22:32.688346  539238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:22:32.915472  539238 api_server.go:72] duration metric: took 548.201756ms to wait for apiserver process to appear ...
	I0926 23:22:32.915499  539238 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:22:32.915521  539238 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0926 23:22:32.921705  539238 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0926 23:22:32.922848  539238 api_server.go:141] control plane version: v1.34.0
	I0926 23:22:32.922949  539238 api_server.go:131] duration metric: took 7.442211ms to wait for apiserver health ...
	I0926 23:22:32.922958  539238 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:22:32.924727  539238 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0926 23:22:32.926164  539238 addons.go:514] duration metric: took 558.848041ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0926 23:22:32.926599  539238 system_pods.go:59] 8 kube-system pods found
	I0926 23:22:32.926632  539238 system_pods.go:61] "coredns-66bc5c9577-49j55" [a4bd29c6-9d53-4d6a-9f15-bc7e8f89a0d6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:32.926642  539238 system_pods.go:61] "coredns-66bc5c9577-f2bz7" [e36e4f7b-8b76-4fc9-bda4-e2aa91f8434e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:32.926653  539238 system_pods.go:61] "etcd-bridge-227717" [c90194d2-92ea-4118-bb34-59a0735e245f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:32.926665  539238 system_pods.go:61] "kube-apiserver-bridge-227717" [82eddf9e-11c5-4cf4-860d-e74122443d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:22:32.926679  539238 system_pods.go:61] "kube-controller-manager-bridge-227717" [f55d8abf-5fb0-4cfc-806e-d6dfde578806] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:22:32.926689  539238 system_pods.go:61] "kube-proxy-47cgp" [b6a43704-cb57-4490-bfce-aeda1c269a71] Running
	I0926 23:22:32.926697  539238 system_pods.go:61] "kube-scheduler-bridge-227717" [c9b4fe5f-64e3-4cbb-b2f5-55d1dce2afae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:22:32.926706  539238 system_pods.go:61] "storage-provisioner" [c5e9f540-b456-4ade-ad9f-32f2a5282c3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:22:32.926717  539238 system_pods.go:74] duration metric: took 3.751675ms to wait for pod list to return data ...
	I0926 23:22:32.926731  539238 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:22:32.929026  539238 default_sa.go:45] found service account: "default"
	I0926 23:22:32.929048  539238 default_sa.go:55] duration metric: took 2.308615ms for default service account to be created ...
	I0926 23:22:32.929058  539238 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:22:32.931675  539238 system_pods.go:86] 8 kube-system pods found
	I0926 23:22:32.931719  539238 system_pods.go:89] "coredns-66bc5c9577-49j55" [a4bd29c6-9d53-4d6a-9f15-bc7e8f89a0d6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:32.931733  539238 system_pods.go:89] "coredns-66bc5c9577-f2bz7" [e36e4f7b-8b76-4fc9-bda4-e2aa91f8434e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:32.931739  539238 system_pods.go:89] "etcd-bridge-227717" [c90194d2-92ea-4118-bb34-59a0735e245f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:32.931744  539238 system_pods.go:89] "kube-apiserver-bridge-227717" [82eddf9e-11c5-4cf4-860d-e74122443d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:22:32.931755  539238 system_pods.go:89] "kube-controller-manager-bridge-227717" [f55d8abf-5fb0-4cfc-806e-d6dfde578806] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:22:32.931761  539238 system_pods.go:89] "kube-proxy-47cgp" [b6a43704-cb57-4490-bfce-aeda1c269a71] Running
	I0926 23:22:32.931766  539238 system_pods.go:89] "kube-scheduler-bridge-227717" [c9b4fe5f-64e3-4cbb-b2f5-55d1dce2afae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:22:32.931773  539238 system_pods.go:89] "storage-provisioner" [c5e9f540-b456-4ade-ad9f-32f2a5282c3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:22:32.931792  539238 retry.go:31] will retry after 226.559612ms: missing components: kube-dns
	I0926 23:22:33.162971  539238 system_pods.go:86] 8 kube-system pods found
	I0926 23:22:33.163010  539238 system_pods.go:89] "coredns-66bc5c9577-49j55" [a4bd29c6-9d53-4d6a-9f15-bc7e8f89a0d6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:33.163026  539238 system_pods.go:89] "coredns-66bc5c9577-f2bz7" [e36e4f7b-8b76-4fc9-bda4-e2aa91f8434e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:33.163035  539238 system_pods.go:89] "etcd-bridge-227717" [c90194d2-92ea-4118-bb34-59a0735e245f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:33.163043  539238 system_pods.go:89] "kube-apiserver-bridge-227717" [82eddf9e-11c5-4cf4-860d-e74122443d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:22:33.163051  539238 system_pods.go:89] "kube-controller-manager-bridge-227717" [f55d8abf-5fb0-4cfc-806e-d6dfde578806] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:22:33.163065  539238 system_pods.go:89] "kube-proxy-47cgp" [b6a43704-cb57-4490-bfce-aeda1c269a71] Running
	I0926 23:22:33.163075  539238 system_pods.go:89] "kube-scheduler-bridge-227717" [c9b4fe5f-64e3-4cbb-b2f5-55d1dce2afae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:22:33.163111  539238 system_pods.go:89] "storage-provisioner" [c5e9f540-b456-4ade-ad9f-32f2a5282c3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:22:33.163138  539238 retry.go:31] will retry after 350.388001ms: missing components: kube-dns
	I0926 23:22:33.178947  539238 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-227717" context rescaled to 1 replicas
	W0926 23:22:32.565611  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:34.565720  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:37.065401  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:33.518595  539238 system_pods.go:86] 8 kube-system pods found
	I0926 23:22:33.518629  539238 system_pods.go:89] "coredns-66bc5c9577-49j55" [a4bd29c6-9d53-4d6a-9f15-bc7e8f89a0d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:33.518638  539238 system_pods.go:89] "coredns-66bc5c9577-f2bz7" [e36e4f7b-8b76-4fc9-bda4-e2aa91f8434e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:33.518651  539238 system_pods.go:89] "etcd-bridge-227717" [c90194d2-92ea-4118-bb34-59a0735e245f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:33.518657  539238 system_pods.go:89] "kube-apiserver-bridge-227717" [82eddf9e-11c5-4cf4-860d-e74122443d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:22:33.518662  539238 system_pods.go:89] "kube-controller-manager-bridge-227717" [f55d8abf-5fb0-4cfc-806e-d6dfde578806] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:22:33.518666  539238 system_pods.go:89] "kube-proxy-47cgp" [b6a43704-cb57-4490-bfce-aeda1c269a71] Running
	I0926 23:22:33.518672  539238 system_pods.go:89] "kube-scheduler-bridge-227717" [c9b4fe5f-64e3-4cbb-b2f5-55d1dce2afae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:22:33.518677  539238 system_pods.go:89] "storage-provisioner" [c5e9f540-b456-4ade-ad9f-32f2a5282c3a] Running
	I0926 23:22:33.518691  539238 system_pods.go:126] duration metric: took 589.625493ms to wait for k8s-apps to be running ...
	I0926 23:22:33.518705  539238 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:22:33.518763  539238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:22:33.531944  539238 system_svc.go:56] duration metric: took 13.225117ms WaitForService to wait for kubelet
	I0926 23:22:33.531979  539238 kubeadm.go:586] duration metric: took 1.164717159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:22:33.532004  539238 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:22:33.534919  539238 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 23:22:33.534954  539238 node_conditions.go:123] node cpu capacity is 8
	I0926 23:22:33.534971  539238 node_conditions.go:105] duration metric: took 2.956742ms to run NodePressure ...
	I0926 23:22:33.534986  539238 start.go:241] waiting for startup goroutines ...
	I0926 23:22:33.535000  539238 start.go:246] waiting for cluster config update ...
	I0926 23:22:33.535022  539238 start.go:255] writing updated cluster config ...
	I0926 23:22:33.535370  539238 ssh_runner.go:195] Run: rm -f paused
	I0926 23:22:33.539181  539238 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:22:33.543859  539238 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-49j55" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:22:35.549246  539238 pod_ready.go:104] pod "coredns-66bc5c9577-49j55" is not "Ready", error: <nil>
	W0926 23:22:37.549996  539238 pod_ready.go:104] pod "coredns-66bc5c9577-49j55" is not "Ready", error: <nil>
	I0926 23:22:39.546591  539238 pod_ready.go:99] pod "coredns-66bc5c9577-49j55" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-49j55" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-49j55" not found
	I0926 23:22:39.546622  539238 pod_ready.go:86] duration metric: took 6.002734328s for pod "coredns-66bc5c9577-49j55" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.546636  539238 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f2bz7" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.551172  539238 pod_ready.go:94] pod "coredns-66bc5c9577-f2bz7" is "Ready"
	I0926 23:22:39.551193  539238 pod_ready.go:86] duration metric: took 4.550574ms for pod "coredns-66bc5c9577-f2bz7" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.553144  539238 pod_ready.go:83] waiting for pod "etcd-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.556581  539238 pod_ready.go:94] pod "etcd-bridge-227717" is "Ready"
	I0926 23:22:39.556601  539238 pod_ready.go:86] duration metric: took 3.432504ms for pod "etcd-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.558524  539238 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.562145  539238 pod_ready.go:94] pod "kube-apiserver-bridge-227717" is "Ready"
	I0926 23:22:39.562167  539238 pod_ready.go:86] duration metric: took 3.627142ms for pod "kube-apiserver-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.564099  539238 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.947447  539238 pod_ready.go:94] pod "kube-controller-manager-bridge-227717" is "Ready"
	I0926 23:22:39.947483  539238 pod_ready.go:86] duration metric: took 383.36072ms for pod "kube-controller-manager-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:40.147990  539238 pod_ready.go:83] waiting for pod "kube-proxy-47cgp" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:40.547815  539238 pod_ready.go:94] pod "kube-proxy-47cgp" is "Ready"
	I0926 23:22:40.547842  539238 pod_ready.go:86] duration metric: took 399.826814ms for pod "kube-proxy-47cgp" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:40.748134  539238 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:41.147443  539238 pod_ready.go:94] pod "kube-scheduler-bridge-227717" is "Ready"
	I0926 23:22:41.147473  539238 pod_ready.go:86] duration metric: took 399.309771ms for pod "kube-scheduler-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:41.147487  539238 pod_ready.go:40] duration metric: took 7.608272943s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:22:41.193346  539238 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:22:41.195201  539238 out.go:179] * Done! kubectl is now configured to use "bridge-227717" cluster and "default" namespace by default
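
The "extra waiting" pass above checks, label selector by label selector, that every control-plane pod reports Ready or has been removed. A hand-run equivalent for a single selector would be something like the following (illustrative; minikube queries the API directly rather than shelling out to kubectl):

# Wait up to 4 minutes for the CoreDNS pods to report Ready.
kubectl --context bridge-227717 -n kube-system wait \
    --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
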
	W0926 23:22:39.565057  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:42.064614  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:44.065719  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:46.065899  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:48.565455  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:50.565543  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:53.064625  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:55.564781  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:57.564857  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:59.565139  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:01.565709  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:04.064862  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:06.065309  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:08.565165  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:10.565349  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:13.064920  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:15.564882  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:17.565258  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:20.064964  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:22.065467  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:24.564804  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:27.064970  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:29.564974  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:31.565357  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:34.065050  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:36.065589  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:38.565336  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:40.565539  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:43.065314  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:45.565284  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:48.065588  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:50.564735  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:52.565357  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:55.064557  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:57.064911  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:23:59.065244  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:01.565478  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:04.064628  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:06.065321  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:08.065532  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:10.564829  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:13.064754  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:15.065614  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:17.065756  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:19.564841  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:22.065034  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:24.564595  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:26.564894  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:29.064518  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:31.065069  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:33.065637  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:35.564512  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:37.565224  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:40.064891  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:42.564450  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:44.564539  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:46.564928  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:49.064762  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:51.065432  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:53.065715  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:55.564787  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:24:57.565037  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:00.064879  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:02.564463  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:04.564913  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:06.564947  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:09.065340  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:11.564448  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:13.565280  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:16.065054  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:18.065272  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:20.564625  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:22.564765  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:24.564945  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:27.065003  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:29.564592  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:31.564931  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:34.064581  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:36.064650  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:38.064924  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:40.564651  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:42.564922  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:45.064870  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:47.064953  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:49.564789  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:52.064916  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:54.065074  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:56.065254  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:25:58.565465  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:01.064837  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:03.064944  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:05.565321  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:07.565411  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:10.065463  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:12.564627  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:14.565619  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:17.065374  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:19.065839  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:21.565515  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:24.065887  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:26.565198  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:28.565808  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:31.065709  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:33.564549  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:35.564934  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:37.565293  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:39.565396  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:42.064557  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:44.064791  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:46.065519  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:48.565396  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:51.065348  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:53.065428  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:55.564828  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:26:57.564986  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:00.064983  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:02.565254  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:05.065078  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:07.565324  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:10.065527  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:12.564984  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:14.565259  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:17.065067  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:19.065868  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:21.066053  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:23.565113  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:26.065444  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:28.565677  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:31.065416  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:33.564549  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:35.565081  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:38.064917  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:40.064988  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:42.565349  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:45.064763  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:47.065214  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:49.065374  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:51.565478  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:54.064758  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:56.065527  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:27:58.565674  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:01.064995  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:03.564883  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:06.065272  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:08.565058  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:11.065185  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:13.065451  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:15.564942  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:18.064690  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:20.065053  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:22.564741  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:25.064781  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:27.564948  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:30.065326  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:32.565597  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:35.065606  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:37.564554  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:40.064777  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:42.065566  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:44.564533  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:46.564833  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:49.064915  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:51.565366  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:54.064684  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:56.064852  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:28:58.564817  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:01.064862  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:03.065394  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:05.565439  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:07.565568  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:10.064768  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:12.565208  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:14.565245  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:17.064833  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:19.564770  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:21.565054  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:24.065056  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:26.564682  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:28.564896  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:30.565041  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:32.565329  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:35.065524  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:37.564885  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:39.565653  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:42.064831  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:44.564953  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:46.565055  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:49.065043  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:51.065368  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:53.065413  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:29:55.065561  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
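
These repeated warnings are minikube's node-readiness poll (node_ready.go) retrying roughly every two seconds while calico-227717 reports Ready=False; with Calico that almost always means the CNI never finished initializing, so pods (including Calico's own) cannot get networking. A minimal way to see the underlying reason, sketched here with the label from the stock Calico manifest (an assumption, not taken from this log):

	# The Ready condition's Reason/Message usually names the blocker, e.g. "cni plugin not initialized"
	kubectl --context calico-227717 describe node calico-227717
	# Check whether the calico-node pods themselves ever became Ready (label assumed)
	kubectl --context calico-227717 -n kube-system get pods -l k8s-app=calico-node -o wide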
	
	
	==> CRI-O <==
	Sep 26 23:28:32 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:28:32.366240814Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=524b4fab-051f-49d2-b159-d6477f802f53 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:28:41 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:28:41.365436272Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8df4c720-0e32-4bce-9300-47f4b2471985 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:28:41 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:28:41.365748695Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8df4c720-0e32-4bce-9300-47f4b2471985 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:28:47 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:28:47.366484240Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6e5b45fc-0b48-4430-9cdf-40334bcfc7e7 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:28:47 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:28:47.366751879Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6e5b45fc-0b48-4430-9cdf-40334bcfc7e7 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:28:55 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:28:55.366244221Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=0f2bce55-83f7-488f-8302-614747ae4ad3 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:28:55 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:28:55.366515808Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=0f2bce55-83f7-488f-8302-614747ae4ad3 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:28:59 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:28:59.365524306Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=a9ee1ebe-f16c-421c-9842-e4d6e1784d35 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:28:59 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:28:59.365931309Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=a9ee1ebe-f16c-421c-9842-e4d6e1784d35 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:10 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:10.365910069Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=abccdc3f-fcf9-4ff3-88c1-1e2d59f81e1b name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:10 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:10.366150076Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=abccdc3f-fcf9-4ff3-88c1-1e2d59f81e1b name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:13 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:13.365668719Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=31fabdd4-b6ef-40df-9abb-028a938e76ad name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:13 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:13.365973722Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=31fabdd4-b6ef-40df-9abb-028a938e76ad name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:23 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:23.365897158Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=545edf08-bfbc-4375-9375-ce3cd7274c0f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:23 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:23.366219576Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=545edf08-bfbc-4375-9375-ce3cd7274c0f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:28 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:28.365839276Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=cac0b45e-6a6b-4eaa-a9be-d65c9623c2f1 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:28 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:28.366110592Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=cac0b45e-6a6b-4eaa-a9be-d65c9623c2f1 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:35 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:35.365620682Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=3a88ba36-7bd0-41fe-a10f-bb05c74be536 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:35 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:35.366314115Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=3a88ba36-7bd0-41fe-a10f-bb05c74be536 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:39 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:39.365726500Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=33347801-7ab1-476f-bf9f-528d5c8de3ce name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:39 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:39.366043958Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=33347801-7ab1-476f-bf9f-528d5c8de3ce name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:39 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:39.366648182Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ccc640dd-1f1d-49d2-9e6a-1228538074f5 name=/runtime.v1.ImageService/PullImage
	Sep 26 23:29:39 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:39.371141428Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 26 23:29:48 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:48.366046535Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9789f74d-fbf2-446e-89d9-56ec518d483f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:29:48 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:29:48.366449125Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9789f74d-fbf2-446e-89d9-56ec518d483f name=/runtime.v1.ImageService/ImageStatus
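
The CRI-O loop above is the kubelet polling ImageStatus for two images that never made it into the store: the dashboard image (whose pull is still in flight at 23:29:39) and fake.domain/registry.k8s.io/echoserver:1.4, which points at a deliberately unresolvable registry. The image store can be inspected directly with crictl inside the node; the commands below follow the report's own minikube ssh convention:

	# List the images CRI-O actually has
	out/minikube-linux-amd64 -p default-k8s-diff-port-441435 ssh "sudo crictl images"
	# Re-run the dashboard pull by hand to surface the registry error directly
	out/minikube-linux-amd64 -p default-k8s-diff-port-441435 ssh "sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0"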
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a80c94dcb6dce       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   3 minutes ago       Exited              dashboard-metrics-scraper   6                   65beccd5d966a       dashboard-metrics-scraper-6ffb444bf9-w7gnq
	9bf11225358a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner         2                   041570f9c31cd       storage-provisioner
	0c3a33e05e709       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 minutes ago       Running             coredns                     1                   fdb68aeaed54c       coredns-66bc5c9577-2svp4
	96a7a432e179a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner         1                   041570f9c31cd       storage-provisioner
	1ea8a4730040a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   9 minutes ago       Running             busybox                     1                   3b9acece4505e       busybox
	1fa78b4bbc3cc       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   9 minutes ago       Running             kube-proxy                  1                   370c72854926f       kube-proxy-9nbwg
	fd86762b4522b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   9 minutes ago       Running             kindnet-cni                 1                   7795ea74236fa       kindnet-qm5t5
	e0edf08c6a8d0       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   9 minutes ago       Running             kube-scheduler              1                   437843520db01       kube-scheduler-default-k8s-diff-port-441435
	21fe9e343c66d       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   9 minutes ago       Running             kube-apiserver              1                   4eda2cd5efb46       kube-apiserver-default-k8s-diff-port-441435
	64c15902266a0       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   9 minutes ago       Running             kube-controller-manager     1                   bc5ae73f49fb9       kube-controller-manager-default-k8s-diff-port-441435
	1d646ab3cd316       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 minutes ago       Running             etcd                        1                   2c0e0dcda36d5       etcd-default-k8s-diff-port-441435
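
In this table only dashboard-metrics-scraper is unhealthy: it Exited on attempt 6, while every other container restarted once (ATTEMPT 1 or 2) and stayed Running after the node restart nine minutes earlier. The standard next step for a crash-looping container, using the pod name from the table:

	# Logs from the last failed attempt
	kubectl --context default-k8s-diff-port-441435 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-w7gnq --previous
	# Exit code, restart count, and back-off events
	kubectl --context default-k8s-diff-port-441435 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-w7gnq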
	
	
	==> coredns [0c3a33e05e70970067727345c250b9acf323d2e519a614e24d23308c8701221b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55106 - 13743 "HINFO IN 6315080665969330000.7669187035379610219. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038747375s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
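
coredns started with an unsynced API (the WARNING above) and then timed out listing Namespaces, EndpointSlices, and Services via the service VIP 10.96.0.1:443, i.e. pod-to-apiserver connectivity was broken for a window right after the restart; the node later reports Ready, so this recovered. A throwaway pod is the quickest in-cluster check (image choice is illustrative):

	# Exercises cluster DNS and the kubernetes service VIP in one shot
	kubectl --context default-k8s-diff-port-441435 run netcheck --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local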
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-441435
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-441435
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=default-k8s-diff-port-441435
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T23_18_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 23:18:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-441435
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 23:29:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 23:29:49 +0000   Fri, 26 Sep 2025 23:18:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 23:29:49 +0000   Fri, 26 Sep 2025 23:18:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 23:29:49 +0000   Fri, 26 Sep 2025 23:18:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 23:29:49 +0000   Fri, 26 Sep 2025 23:19:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-441435
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9615023ddc864d09988eb0bc06957254
	  System UUID:                837c0aba-5121-40c5-a1c3-287b72515219
	  Boot ID:                    ce94279f-6745-4217-952d-f9fda1755c22
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-2svp4                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-default-k8s-diff-port-441435                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-qm5t5                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-default-k8s-diff-port-441435             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-441435    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9nbwg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-default-k8s-diff-port-441435             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-746fcd58dc-n2fs6                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-w7gnq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hjt5g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 9m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                    node-controller  Node default-k8s-diff-port-441435 event: Registered Node default-k8s-diff-port-441435 in Controller
	  Normal  NodeReady                10m                    kubelet          Node default-k8s-diff-port-441435 status is now: NodeReady
	  Normal  Starting                 9m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m45s (x8 over 9m45s)  kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m45s (x8 over 9m45s)  kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m45s (x8 over 9m45s)  kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m38s                  node-controller  Node default-k8s-diff-port-441435 event: Registered Node default-k8s-diff-port-441435 in Controller
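
As a consistency check, the Allocated resources figures above are just the column sums of the pod table: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 950m, and 950m of the 8000m allocatable is 11.875%, truncated to the 11% shown; memory requests 70Mi + 100Mi + 50Mi + 200Mi = 420Mi, and the only limits come from coredns (170Mi) and kindnet (100m CPU, 50Mi), giving the 100m / 220Mi totals.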
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 48 6a f6 09 4b 08 06
	[ +10.903979] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a e8 7e d3 5b 72 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 63 22 23 91 7c 08 06
	[  +0.001352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ad 0e 7e 3d 1a 08 06
	[ +32.901964] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 52 a1 ed 2f d7 08 06
	[  +0.000406] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 48 6a f6 09 4b 08 06
	[Sep26 23:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e c1 c6 a4 45 cb 08 06
	[ +17.540919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 9e d2 16 c1 17 08 06
	[  +0.001348] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a c7 2f db 7f 89 08 06
	[  +4.808582] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 38 a7 fc 6c f4 08 06
	[  +0.000403] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e c1 c6 a4 45 cb 08 06
	[ +13.075040] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de e8 79 e9 6a 4e 08 06
	[  +0.000347] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 9e d2 16 c1 17 08 06
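
The "martian source" entries are the kernel's reverse-path filter logging packets whose source address (pod addresses in 10.244.0.0/24) looks implausible on eth0; with bridged container networking this is routine noise during pod churn, not a fault. Whether these get logged is governed by two sysctls, shown for reference:

	# log_martians=1 enables these messages; rp_filter 1/2 selects strict/loose reverse-path filtering
	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter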
	
	
	==> etcd [1d646ab3cd316ea8612e146e135cdfea98b3e83b08e2488d4055108a2e9cd101] <==
	{"level":"info","ts":"2025-09-26T23:20:25.972876Z","caller":"traceutil/trace.go:172","msg":"trace[742503914] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:1; response_revision:588; }","duration":"133.213628ms","start":"2025-09-26T23:20:25.839647Z","end":"2025-09-26T23:20:25.972861Z","steps":["trace[742503914] 'agreement among raft nodes before linearized reading'  (duration: 132.663776ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:20:25.972571Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.927241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:4584"}
	{"level":"info","ts":"2025-09-26T23:20:25.972954Z","caller":"traceutil/trace.go:172","msg":"trace[781361727] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:587; }","duration":"133.314209ms","start":"2025-09-26T23:20:25.839628Z","end":"2025-09-26T23:20:25.972942Z","steps":["trace[781361727] 'agreement among raft nodes before linearized reading'  (duration: 87.214527ms)","trace[781361727] 'range keys from in-memory index tree'  (duration: 45.630345ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:20:26.248697Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.861576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq\" limit:1 ","response":"range_response_count:1 size:2792"}
	{"level":"info","ts":"2025-09-26T23:20:26.248773Z","caller":"traceutil/trace.go:172","msg":"trace[632422809] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq; range_end:; response_count:1; response_revision:602; }","duration":"185.956805ms","start":"2025-09-26T23:20:26.062799Z","end":"2025-09-26T23:20:26.248756Z","steps":["trace[632422809] 'agreement among raft nodes before linearized reading'  (duration: 39.369808ms)","trace[632422809] 'range keys from in-memory index tree'  (duration: 146.433959ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:20:26.248843Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.553482ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765092064431741 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:594 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:4688 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-26T23:20:26.249199Z","caller":"traceutil/trace.go:172","msg":"trace[982243969] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"191.168923ms","start":"2025-09-26T23:20:26.058015Z","end":"2025-09-26T23:20:26.249184Z","steps":["trace[982243969] 'process raft request'  (duration: 191.043008ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:20:26.249356Z","caller":"traceutil/trace.go:172","msg":"trace[2016466082] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"191.353752ms","start":"2025-09-26T23:20:26.057991Z","end":"2025-09-26T23:20:26.249345Z","steps":["trace[2016466082] 'process raft request'  (duration: 44.225429ms)","trace[2016466082] 'compare'  (duration: 146.430808ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-26T23:20:42.890070Z","caller":"traceutil/trace.go:172","msg":"trace[1941064881] linearizableReadLoop","detail":"{readStateIndex:664; appliedIndex:664; }","duration":"104.059489ms","start":"2025-09-26T23:20:42.785982Z","end":"2025-09-26T23:20:42.890041Z","steps":["trace[1941064881] 'read index received'  (duration: 104.053033ms)","trace[1941064881] 'applied index is now lower than readState.Index'  (duration: 5.628µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:20:42.894309Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.306702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-2svp4\" limit:1 ","response":"range_response_count:1 size:5928"}
	{"level":"info","ts":"2025-09-26T23:20:42.894365Z","caller":"traceutil/trace.go:172","msg":"trace[1715597443] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-2svp4; range_end:; response_count:1; response_revision:625; }","duration":"108.375408ms","start":"2025-09-26T23:20:42.785973Z","end":"2025-09-26T23:20:42.894348Z","steps":["trace[1715597443] 'agreement among raft nodes before linearized reading'  (duration: 104.13383ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:20:42.894367Z","caller":"traceutil/trace.go:172","msg":"trace[785398395] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"130.034059ms","start":"2025-09-26T23:20:42.764318Z","end":"2025-09-26T23:20:42.894352Z","steps":["trace[785398395] 'process raft request'  (duration: 125.794582ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:21:40.261770Z","caller":"traceutil/trace.go:172","msg":"trace[1456935280] linearizableReadLoop","detail":"{readStateIndex:762; appliedIndex:762; }","duration":"100.806589ms","start":"2025-09-26T23:21:40.160935Z","end":"2025-09-26T23:21:40.261741Z","steps":["trace[1456935280] 'read index received'  (duration: 100.798933ms)","trace[1456935280] 'applied index is now lower than readState.Index'  (duration: 6.684µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:21:40.261897Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.935154ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:21:40.261965Z","caller":"traceutil/trace.go:172","msg":"trace[16243359] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:711; }","duration":"101.009729ms","start":"2025-09-26T23:21:40.160927Z","end":"2025-09-26T23:21:40.261937Z","steps":["trace[16243359] 'agreement among raft nodes before linearized reading'  (duration: 100.898225ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:21:40.261975Z","caller":"traceutil/trace.go:172","msg":"trace[2060903208] transaction","detail":"{read_only:false; response_revision:712; number_of_response:1; }","duration":"129.905716ms","start":"2025-09-26T23:21:40.132042Z","end":"2025-09-26T23:21:40.261948Z","steps":["trace[2060903208] 'process raft request'  (duration: 129.75294ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:21:40.443063Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.730773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:21:40.443160Z","caller":"traceutil/trace.go:172","msg":"trace[1149011100] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:712; }","duration":"107.841127ms","start":"2025-09-26T23:21:40.335302Z","end":"2025-09-26T23:21:40.443143Z","steps":["trace[1149011100] 'range keys from in-memory index tree'  (duration: 107.626066ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:22:11.595673Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.548576ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765092064432610 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-746fcd58dc-n2fs6.1868f871dc670f2a\" mod_revision:677 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-746fcd58dc-n2fs6.1868f871dc670f2a\" value_size:694 lease:6571765092064432247 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-746fcd58dc-n2fs6.1868f871dc670f2a\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-26T23:22:11.595924Z","caller":"traceutil/trace.go:172","msg":"trace[133391157] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"129.324958ms","start":"2025-09-26T23:22:11.466574Z","end":"2025-09-26T23:22:11.595899Z","steps":["trace[133391157] 'compare'  (duration: 126.45573ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:22:12.815360Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.85881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:22:12.815436Z","caller":"traceutil/trace.go:172","msg":"trace[1508898893] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:750; }","duration":"228.950283ms","start":"2025-09-26T23:22:12.586469Z","end":"2025-09-26T23:22:12.815419Z","steps":["trace[1508898893] 'range keys from in-memory index tree'  (duration: 228.759404ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:22:12.815368Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.572022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.94.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-09-26T23:22:12.815535Z","caller":"traceutil/trace.go:172","msg":"trace[104734176] range","detail":"{range_begin:/registry/masterleases/192.168.94.2; range_end:; response_count:1; response_revision:750; }","duration":"123.744618ms","start":"2025-09-26T23:22:12.691773Z","end":"2025-09-26T23:22:12.815518Z","steps":["trace[104734176] 'range keys from in-memory index tree'  (duration: 123.399395ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:22:13.200982Z","caller":"traceutil/trace.go:172","msg":"trace[547930826] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"118.418788ms","start":"2025-09-26T23:22:13.082527Z","end":"2025-09-26T23:22:13.200945Z","steps":["trace[547930826] 'process raft request'  (duration: 118.246235ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:30:02 up  3:12,  0 users,  load average: 0.81, 1.18, 3.05
	Linux default-k8s-diff-port-441435 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [fd86762b4522b1c596fbe8975d894d466731398f03f76681cf932e1b4dfea904] <==
	I0926 23:28:02.339152       1 main.go:301] handling current node
	I0926 23:28:12.340167       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:28:12.340203       1 main.go:301] handling current node
	I0926 23:28:22.330666       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:28:22.330707       1 main.go:301] handling current node
	I0926 23:28:32.338869       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:28:32.338916       1 main.go:301] handling current node
	I0926 23:28:42.336327       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:28:42.336367       1 main.go:301] handling current node
	I0926 23:28:52.331418       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:28:52.331451       1 main.go:301] handling current node
	I0926 23:29:02.336146       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:29:02.336199       1 main.go:301] handling current node
	I0926 23:29:12.340227       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:29:12.340264       1 main.go:301] handling current node
	I0926 23:29:22.330603       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:29:22.330649       1 main.go:301] handling current node
	I0926 23:29:32.334917       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:29:32.334967       1 main.go:301] handling current node
	I0926 23:29:42.336200       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:29:42.336243       1 main.go:301] handling current node
	I0926 23:29:52.330821       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:29:52.330856       1 main.go:301] handling current node
	I0926 23:30:02.338198       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:30:02.338239       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21fe9e343c66d826051000d442f68139ba6476d8d897627f45fbaa98c51cd141] <==
	I0926 23:25:35.003284       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0926 23:26:21.560396       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:26:21.560455       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0926 23:26:21.560470       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0926 23:26:21.561563       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:26:21.561669       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0926 23:26:21.561686       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0926 23:26:43.185891       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:26:48.074661       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:27:48.637680       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:28:00.176152       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0926 23:28:21.561295       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:28:21.561352       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0926 23:28:21.561367       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0926 23:28:21.562415       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:28:21.562521       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0926 23:28:21.562540       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0926 23:29:12.048376       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:29:15.546533       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [64c15902266a0928888ac2fd8de2a1006cf6d55f71d5294f6c4bdffe76988b44] <==
	I0926 23:23:55.032125       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:24:25.002465       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:24:25.039588       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:24:55.007230       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:24:55.045990       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:25:25.011720       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:25:25.053985       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:25:55.016147       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:25:55.060424       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:26:25.021833       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:26:25.068469       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:26:55.027454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:26:55.075336       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:27:25.032205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:27:25.082290       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:27:55.036777       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:27:55.088937       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:28:25.041442       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:28:25.095881       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:28:55.046260       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:28:55.103478       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:29:25.050679       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:29:25.109870       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:29:55.055451       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:29:55.117231       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
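
These controller-manager errors and the kube-apiserver 503s above share one root cause: the v1beta1.metrics.k8s.io APIService is registered but its backing metrics-server pod never started, because its image is the unpullable fake.domain one seen in the CRI-O and kubelet logs. The aggregated API's state can be read directly (pod label assumed from the stock metrics-server manifest):

	# Expect AVAILABLE=False with a discovery-check failure message
	kubectl --context default-k8s-diff-port-441435 get apiservice v1beta1.metrics.k8s.io
	kubectl --context default-k8s-diff-port-441435 -n kube-system get pods -l k8s-app=metrics-server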
	
	
	==> kube-proxy [1fa78b4bbc3cc848294e5ae05f1aed1b0d0d54ffd25adc12d14666d297c4d06a] <==
	I0926 23:20:21.917321       1 server_linux.go:53] "Using iptables proxy"
	I0926 23:20:21.979131       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 23:20:22.080259       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 23:20:22.080307       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0926 23:20:22.080385       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 23:20:22.100249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 23:20:22.100326       1 server_linux.go:132] "Using iptables Proxier"
	I0926 23:20:22.105683       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 23:20:22.106548       1 server.go:527] "Version info" version="v1.34.0"
	I0926 23:20:22.106575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:20:22.109072       1 config.go:309] "Starting node config controller"
	I0926 23:20:22.109104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 23:20:22.109113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 23:20:22.109350       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 23:20:22.109359       1 config.go:200] "Starting service config controller"
	I0926 23:20:22.109362       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 23:20:22.109366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 23:20:22.109382       1 config.go:106] "Starting endpoint slice config controller"
	I0926 23:20:22.109388       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 23:20:22.209477       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 23:20:22.209507       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 23:20:22.209485       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e0edf08c6a8d03aecd750f5c9c512d8c79264b69e219947b80910ae37c4b980a] <==
	I0926 23:20:18.567543       1 serving.go:386] Generated self-signed cert in-memory
	W0926 23:20:20.544902       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 23:20:20.544941       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 23:20:20.544954       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 23:20:20.544986       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 23:20:20.586189       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 23:20:20.586294       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:20:20.590799       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 23:20:20.590937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:20:20.590993       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 23:20:20.607719       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:20:20.631005       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 23:29:07 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:07.427988     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929347427682026  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:10 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:10.366507     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-n2fs6" podUID="94cb3f4e-8d94-49b7-849d-e5ae5a749431"
	Sep 26 23:29:13 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:13.366317     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjt5g" podUID="f3e14755-2887-45b0-be6b-6ce721ec83dc"
	Sep 26 23:29:15 default-k8s-diff-port-441435 kubelet[689]: I0926 23:29:15.365468     689 scope.go:117] "RemoveContainer" containerID="a80c94dcb6dce01dabf7312ec15ce000293b5ce6699099a3bdd3f8e9ada33cb5"
	Sep 26 23:29:15 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:15.365706     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w7gnq_kubernetes-dashboard(d62b2960-6886-4571-8569-886c377485d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq" podUID="d62b2960-6886-4571-8569-886c377485d9"
	Sep 26 23:29:17 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:17.429576     689 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758929357429355064  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:17 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:17.429621     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929357429355064  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:23 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:23.366640     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-n2fs6" podUID="94cb3f4e-8d94-49b7-849d-e5ae5a749431"
	Sep 26 23:29:27 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:27.431460     689 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758929367431226427  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:27 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:27.431505     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929367431226427  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:28 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:28.366510     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjt5g" podUID="f3e14755-2887-45b0-be6b-6ce721ec83dc"
	Sep 26 23:29:30 default-k8s-diff-port-441435 kubelet[689]: I0926 23:29:30.365623     689 scope.go:117] "RemoveContainer" containerID="a80c94dcb6dce01dabf7312ec15ce000293b5ce6699099a3bdd3f8e9ada33cb5"
	Sep 26 23:29:30 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:30.365812     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w7gnq_kubernetes-dashboard(d62b2960-6886-4571-8569-886c377485d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq" podUID="d62b2960-6886-4571-8569-886c377485d9"
	Sep 26 23:29:35 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:35.367480     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-n2fs6" podUID="94cb3f4e-8d94-49b7-849d-e5ae5a749431"
	Sep 26 23:29:37 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:37.432615     689 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758929377432337285  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:37 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:37.432659     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929377432337285  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:43 default-k8s-diff-port-441435 kubelet[689]: I0926 23:29:43.365614     689 scope.go:117] "RemoveContainer" containerID="a80c94dcb6dce01dabf7312ec15ce000293b5ce6699099a3bdd3f8e9ada33cb5"
	Sep 26 23:29:43 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:43.365851     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w7gnq_kubernetes-dashboard(d62b2960-6886-4571-8569-886c377485d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq" podUID="d62b2960-6886-4571-8569-886c377485d9"
	Sep 26 23:29:47 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:47.434434     689 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758929387434194972  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:47 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:47.434478     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929387434194972  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:48 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:48.366827     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-n2fs6" podUID="94cb3f4e-8d94-49b7-849d-e5ae5a749431"
	Sep 26 23:29:57 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:57.435778     689 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758929397435513405  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:57 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:57.435814     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929397435513405  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:29:58 default-k8s-diff-port-441435 kubelet[689]: I0926 23:29:58.364959     689 scope.go:117] "RemoveContainer" containerID="a80c94dcb6dce01dabf7312ec15ce000293b5ce6699099a3bdd3f8e9ada33cb5"
	Sep 26 23:29:58 default-k8s-diff-port-441435 kubelet[689]: E0926 23:29:58.365204     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w7gnq_kubernetes-dashboard(d62b2960-6886-4571-8569-886c377485d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq" podUID="d62b2960-6886-4571-8569-886c377485d9"
	
	
	==> storage-provisioner [96a7a432e179ab0bb5840b3bbbd120003b450916d554d40c62aa9613d6afe25a] <==
	I0926 23:20:21.910853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0926 23:20:51.914552       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9bf11225358a2ee7799f2fb960a31d828a307b994fa48ec323c5f0c2fb6be477] <==
	W0926 23:29:38.136221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:40.139222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:40.143193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:42.146461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:42.150353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:44.153620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:44.157745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:46.160873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:46.166234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:48.169733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:48.173551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:50.176621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:50.180603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:52.184364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:52.188180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:54.191627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:54.196482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:56.199969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:56.203936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:58.207674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:29:58.211599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:30:00.214326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:30:00.219160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:30:02.223018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:30:02.226859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
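The dump above carries two recurring failure signatures: the first storage-provisioner instance dies with an i/o timeout against the in-cluster API VIP (10.96.0.1:443), and the controller manager keeps reporting stale metrics.k8s.io/v1beta1 discovery because metrics-server never starts (its image pull is deliberately broken in this test). A minimal triage sketch for both symptoms, assuming the profile name from this run:

	# Is the aggregated metrics API registered but unavailable?
	kubectl --context default-k8s-diff-port-441435 get apiservice v1beta1.metrics.k8s.io

	# Probe the service VIP from inside the node, mirroring what storage-provisioner attempts
	out/minikube-linux-amd64 -p default-k8s-diff-port-441435 ssh "curl -sk --max-time 10 https://10.96.0.1:443/version"

	# The repeated v1 Endpoints deprecation warnings point at the discovery.k8s.io replacement
	kubectl --context default-k8s-diff-port-441435 get endpointslices -A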
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-n2fs6 kubernetes-dashboard-855c9754f9-hjt5g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 describe pod metrics-server-746fcd58dc-n2fs6 kubernetes-dashboard-855c9754f9-hjt5g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-441435 describe pod metrics-server-746fcd58dc-n2fs6 kubernetes-dashboard-855c9754f9-hjt5g: exit status 1 (59.140731ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-n2fs6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-hjt5g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-441435 describe pod metrics-server-746fcd58dc-n2fs6 kubernetes-dashboard-855c9754f9-hjt5g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.54s)
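The post-mortem's non-running-pod query is a reusable pattern on its own: --field-selector filters server-side, so only the offending pods come back. A standalone sketch (note that pods in phase Succeeded also match this selector):

	# List pods in any namespace whose phase is not Running
	kubectl --context default-k8s-diff-port-441435 get po -A \
	  --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'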

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hjt5g" [f3e14755-2887-45b0-be6b-6ce721ec83dc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0926 23:30:04.113023  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/kindnet-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:05.992431  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/flannel-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:20.964126  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/auto-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:25.515492  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:30:31.815063  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/kindnet-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:31:07.769847  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/custom-flannel-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:31:35.473236  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/custom-flannel-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:31:39.109339  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/enable-default-cni-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:32:05.775921  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/old-k8s-version-413278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:32:06.812241  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/enable-default-cni-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:32:12.143526  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:32:19.818114  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:32:22.131907  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/flannel-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:32:41.655329  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:32:49.834550  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/flannel-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:33:09.357504  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:33:43.694354  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
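The cert_rotation errors above appear to stem from client-go transports created for test profiles deleted earlier in the run (kindnet-227717, flannel-227717, and so on); their rotation loops keep re-reading client.crt paths that no longer exist. They are noise for this test, but stale kubeconfig entries left behind by deleted profiles can be pruned; a hedged sketch using a context name taken from the errors:

	# Show which contexts the kubeconfig still references
	kubectl config get-contexts

	# Drop a context whose profile (and client certificate) is already gone
	kubectl config delete-context kindnet-227717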
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-26 23:39:03.435259702 +0000 UTC m=+4181.345195976
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 describe po kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-441435 describe po kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-hjt5g
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-441435/192.168.94.2
Start Time:       Fri, 26 Sep 2025 23:20:26 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2vq4r (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-2vq4r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjt5g to default-k8s-diff-port-441435
  Warning  Failed     16m (x2 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     12m (x5 over 18m)     kubelet            Error: ErrImagePull
  Warning  Failed     12m (x3 over 17m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    2m59s (x48 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     2m59s (x48 over 18m)  kubelet            Error: ImagePullBackOff
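The events pin the failure on Docker Hub's anonymous pull rate limit (toomanyrequests). The remaining quota can be read straight from the registry using Docker's documented ratelimitpreview repository; a sketch assuming curl and jq are available on the runner:

	# Fetch an anonymous pull token, then read the RateLimit-* headers from a HEAD request
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit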
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 logs kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-441435 logs kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard: exit status 1 (68.346153ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-hjt5g" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-441435 logs kubernetes-dashboard-855c9754f9-hjt5g -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
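Because the dashboard pod never gets past ImagePullBackOff, one possible mitigation on rate-limited runners is to side-load the image instead of pulling it from docker.io; a sketch, assuming the image is already present in the host's Docker daemon or local cache:

	# Push a locally available image into the profile's container runtime (crio here)
	out/minikube-linux-amd64 -p default-k8s-diff-port-441435 image load docker.io/kubernetesui/dashboard:v2.7.0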
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-441435
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-441435:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147",
	        "Created": "2025-09-26T23:18:36.870642262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 509025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-26T23:20:10.302958143Z",
	            "FinishedAt": "2025-09-26T23:20:09.423908983Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147/hostname",
	        "HostsPath": "/var/lib/docker/containers/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147/hosts",
	        "LogPath": "/var/lib/docker/containers/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147/faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147-json.log",
	        "Name": "/default-k8s-diff-port-441435",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-441435:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-441435",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "faff2fd42df2d5e9047db5fb9d655933514f60139430dd9aeb602219242a3147",
	                "LowerDir": "/var/lib/docker/overlay2/f25f005a60672920cf81f4a02033f3002103e9e07337b8cfc62597d68a468832-init/diff:/var/lib/docker/overlay2/539bc53ffef0f27e9ac4c376a14359e91e4b4c4b56d5675ca6caeaaab94e33fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f25f005a60672920cf81f4a02033f3002103e9e07337b8cfc62597d68a468832/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f25f005a60672920cf81f4a02033f3002103e9e07337b8cfc62597d68a468832/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f25f005a60672920cf81f4a02033f3002103e9e07337b8cfc62597d68a468832/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-441435",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-441435/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-441435",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-441435",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-441435",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe1778a957bdd3a1569d5b718442bd0be6d75eacaea1973c3697b05f5c62194f",
	            "SandboxKey": "/var/run/docker/netns/fe1778a957bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-441435": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:29:40:e4:a2:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f3a1c78f885aa3cc6f148623dfbda6420751a6ff0ff6f75cff0c1de9224dfed",
	                    "EndpointID": "bedf54f72fd29de78316353893340b8a723c4cfeb59df455f8358005708f3b95",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-441435",
	                        "faff2fd42df2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
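The inspect output confirms that the profile's non-default API server port 8444/tcp is published to 127.0.0.1:33116. The same mapping can be read without scanning the full JSON; a sketch:

	# Show just the host binding for the container's 8444/tcp port
	docker port default-k8s-diff-port-441435 8444

	# Or pull the field with a Go template
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-441435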
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-441435 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-441435 logs -n 25: (1.203164553s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-227717 sudo iptables -t nat -L -n -v                                 │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │ 26 Sep 25 23:34 UTC │
	│ ssh     │ -p calico-227717 sudo systemctl status kubelet --all --full --no-pager         │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │ 26 Sep 25 23:34 UTC │
	│ ssh     │ -p calico-227717 sudo systemctl cat kubelet --no-pager                         │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │ 26 Sep 25 23:34 UTC │
	│ ssh     │ -p calico-227717 sudo journalctl -xeu kubelet --all --full --no-pager          │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │ 26 Sep 25 23:34 UTC │
	│ ssh     │ -p calico-227717 sudo cat /etc/kubernetes/kubelet.conf                         │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │ 26 Sep 25 23:34 UTC │
	│ ssh     │ -p calico-227717 sudo cat /var/lib/kubelet/config.yaml                         │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │ 26 Sep 25 23:34 UTC │
	│ ssh     │ -p calico-227717 sudo systemctl status docker --all --full --no-pager          │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │                     │
	│ ssh     │ -p calico-227717 sudo systemctl cat docker --no-pager                          │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │ 26 Sep 25 23:34 UTC │
	│ ssh     │ -p calico-227717 sudo cat /etc/docker/daemon.json                              │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:34 UTC │                     │
	│ ssh     │ -p calico-227717 sudo docker system info                                       │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │                     │
	│ ssh     │ -p calico-227717 sudo systemctl status cri-docker --all --full --no-pager      │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │                     │
	│ ssh     │ -p calico-227717 sudo systemctl cat cri-docker --no-pager                      │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │                     │
	│ ssh     │ -p calico-227717 sudo cat /usr/lib/systemd/system/cri-docker.service           │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo cri-dockerd --version                                    │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo systemctl status containerd --all --full --no-pager      │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │                     │
	│ ssh     │ -p calico-227717 sudo systemctl cat containerd --no-pager                      │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo cat /lib/systemd/system/containerd.service               │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo cat /etc/containerd/config.toml                          │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo containerd config dump                                   │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo systemctl status crio --all --full --no-pager            │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo systemctl cat crio --no-pager                            │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ ssh     │ -p calico-227717 sudo crio config                                              │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	│ delete  │ -p calico-227717                                                               │ calico-227717 │ jenkins │ v1.37.0 │ 26 Sep 25 23:35 UTC │ 26 Sep 25 23:35 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 23:22:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 23:22:08.279028  539238 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:22:08.279349  539238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:22:08.279360  539238 out.go:374] Setting ErrFile to fd 2...
	I0926 23:22:08.279364  539238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:22:08.279528  539238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 23:22:08.280079  539238 out.go:368] Setting JSON to false
	I0926 23:22:08.281299  539238 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11077,"bootTime":1758917851,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:22:08.281388  539238 start.go:140] virtualization: kvm guest
	I0926 23:22:08.283540  539238 out.go:179] * [bridge-227717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:22:08.285043  539238 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:22:08.285059  539238 notify.go:220] Checking for updates...
	I0926 23:22:08.287982  539238 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:22:08.289436  539238 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 23:22:08.290681  539238 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 23:22:08.292054  539238 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:22:08.293273  539238 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:22:08.295050  539238 config.go:182] Loaded profile config "calico-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:08.295193  539238 config.go:182] Loaded profile config "default-k8s-diff-port-441435": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:08.295291  539238 config.go:182] Loaded profile config "flannel-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:08.295460  539238 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:22:08.319504  539238 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 23:22:08.319654  539238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:22:08.375935  539238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-26 23:22:08.36470223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:22:08.376119  539238 docker.go:318] overlay module found
	I0926 23:22:08.378140  539238 out.go:179] * Using the docker driver based on user configuration
	I0926 23:22:08.379639  539238 start.go:304] selected driver: docker
	I0926 23:22:08.379662  539238 start.go:924] validating driver "docker" against <nil>
	I0926 23:22:08.379677  539238 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:22:08.380420  539238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:22:08.437454  539238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-26 23:22:08.427736807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:22:08.437614  539238 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 23:22:08.437845  539238 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:22:08.439688  539238 out.go:179] * Using Docker driver with root privileges
	I0926 23:22:08.441008  539238 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:22:08.441030  539238 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 23:22:08.441130  539238 start.go:348] cluster config:
	{Name:bridge-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:22:08.442534  539238 out.go:179] * Starting "bridge-227717" primary control-plane node in "bridge-227717" cluster
	I0926 23:22:08.443844  539238 cache.go:123] Beginning downloading kic base image for docker with crio
	I0926 23:22:08.445170  539238 out.go:179] * Pulling base image v0.0.48 ...
	I0926 23:22:08.446359  539238 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:22:08.446397  539238 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0926 23:22:08.446404  539238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0926 23:22:08.446414  539238 cache.go:58] Caching tarball of preloaded images
	I0926 23:22:08.446520  539238 preload.go:172] Found /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0926 23:22:08.446534  539238 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0926 23:22:08.446643  539238 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/config.json ...
	I0926 23:22:08.446667  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/config.json: {Name:mk96aa4c4d7cc09ca7898d9a34b38afcf66f305a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
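
The lock.go:35 line above shows a WriteFile guarded by a named lock with a 500ms retry delay and a 1m timeout. A rough equivalent using the gofrs/flock package, purely for illustration (minikube's own lock implementation differs):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/gofrs/flock"
)

// writeFileLocked serializes writers of a shared config file, retrying
// the lock every 500ms and giving up after 1m, mirroring the
// Delay/Timeout values in the log line above.
func writeFileLocked(path string, data []byte) error {
	lock := flock.New(path + ".lock")
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	ok, err := lock.TryLockContext(ctx, 500*time.Millisecond)
	if err != nil {
		return err
	}
	if !ok {
		return fmt.Errorf("timed out waiting for lock on %s", path)
	}
	defer lock.Unlock()
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := writeFileLocked("/tmp/config.json", []byte("{}\n")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
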
	I0926 23:22:08.467252  539238 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0926 23:22:08.467269  539238 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0926 23:22:08.467284  539238 cache.go:232] Successfully downloaded all kic artifacts
	I0926 23:22:08.467317  539238 start.go:360] acquireMachinesLock for bridge-227717: {Name:mkeb267a799f13412ae5263736c628e51911a08b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 23:22:08.467417  539238 start.go:364] duration metric: took 81.336µs to acquireMachinesLock for "bridge-227717"
	I0926 23:22:08.467450  539238 start.go:93] Provisioning new machine with config: &{Name:bridge-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:22:08.467561  539238 start.go:125] createHost starting for "" (driver="docker")
	I0926 23:22:05.943549  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:05.943587  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:05.943598  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:05.943606  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:05.943617  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:05.943626  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:05.943632  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:05.943638  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:05.943675  531066 retry.go:31] will retry after 838.406774ms: missing components: kube-dns
	I0926 23:22:06.785872  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:06.785916  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:06.785946  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:06.785955  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:06.785963  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:06.785973  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:06.785979  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:06.785985  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:06.786009  531066 retry.go:31] will retry after 896.684906ms: missing components: kube-dns
	I0926 23:22:07.686824  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:07.686860  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:07.686868  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:07.686873  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:07.686881  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:07.686887  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:07.686892  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:07.686897  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:07.686916  531066 retry.go:31] will retry after 1.836710124s: missing components: kube-dns
	I0926 23:22:09.528120  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:09.528157  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:09.528164  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:09.528171  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:09.528175  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:09.528180  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:09.528186  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:09.528191  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:09.528214  531066 retry.go:31] will retry after 1.67750311s: missing components: kube-dns
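
The retry.go:31 lines above poll the kube-system pod list and sleep a growing, jittered interval until kube-dns (CoreDNS) reports Ready. A simplified sketch of that poll-and-retry shape; waitForComponent and the toy check function are hypothetical names for illustration, not minikube's API:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForComponent polls check() until it returns nil or the deadline
// passes, sleeping a jittered, growing interval between attempts --
// the same shape as the "will retry after ..." lines above.
func waitForComponent(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	base := 800 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := base + time.Duration(rand.Int63n(int64(base/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		base = base * 3 / 2 // grow the interval, as the logged delays do
	}
}

func main() {
	attempts := 0
	_ = waitForComponent(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("missing components: kube-dns")
		}
		return nil
	})
}
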
	W0926 23:22:09.065330  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:11.130757  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:08.469282  539238 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0926 23:22:08.469472  539238 start.go:159] libmachine.API.Create for "bridge-227717" (driver="docker")
	I0926 23:22:08.469500  539238 client.go:168] LocalClient.Create starting
	I0926 23:22:08.469578  539238 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem
	I0926 23:22:08.469605  539238 main.go:141] libmachine: Decoding PEM data...
	I0926 23:22:08.469618  539238 main.go:141] libmachine: Parsing certificate...
	I0926 23:22:08.469708  539238 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem
	I0926 23:22:08.469734  539238 main.go:141] libmachine: Decoding PEM data...
	I0926 23:22:08.469748  539238 main.go:141] libmachine: Parsing certificate...
	I0926 23:22:08.470124  539238 cli_runner.go:164] Run: docker network inspect bridge-227717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0926 23:22:08.486145  539238 cli_runner.go:211] docker network inspect bridge-227717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0926 23:22:08.486219  539238 network_create.go:284] running [docker network inspect bridge-227717] to gather additional debugging logs...
	I0926 23:22:08.486245  539238 cli_runner.go:164] Run: docker network inspect bridge-227717
	W0926 23:22:08.501901  539238 cli_runner.go:211] docker network inspect bridge-227717 returned with exit code 1
	I0926 23:22:08.501932  539238 network_create.go:287] error running [docker network inspect bridge-227717]: docker network inspect bridge-227717: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-227717 not found
	I0926 23:22:08.501949  539238 network_create.go:289] output of [docker network inspect bridge-227717]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-227717 not found
	
	** /stderr **
	I0926 23:22:08.502033  539238 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 23:22:08.518845  539238 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-61b47db54300 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:5a:0f:e5:da:60} reservation:<nil>}
	I0926 23:22:08.519525  539238 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d81bcc6cb1d8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:9e:9a:18:c3:8e} reservation:<nil>}
	I0926 23:22:08.520447  539238 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b6dea4b9b493 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:a1:51:b0:46:1c} reservation:<nil>}
	I0926 23:22:08.521576  539238 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cc95b0}
	I0926 23:22:08.521607  539238 network_create.go:124] attempt to create docker network bridge-227717 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0926 23:22:08.521659  539238 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-227717 bridge-227717
	I0926 23:22:08.580593  539238 network_create.go:108] docker network bridge-227717 192.168.76.0/24 created
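
The network.go lines above walk candidate private /24 subnets, skip any that a local interface already occupies, and settle on the first free one (here 192.168.76.0/24, after 49, 58 and 67 were taken). A sketch of that scan using only the standard net package; the stride of 9 matches the subnets logged above but is otherwise an assumption:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate /24s starting at 192.168.49.0 in
// strides of 9 (49, 58, 67, 76, ... as in the log above) and returns
// the first one no local interface address already falls inside.
func firstFreeSubnet() (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for third := 49; third <= 247; third += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		taken := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free /24 found in 192.168.0.0/16")
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet)
}
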
	I0926 23:22:08.580622  539238 kic.go:121] calculated static IP "192.168.76.2" for the "bridge-227717" container
	I0926 23:22:08.580696  539238 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0926 23:22:08.598609  539238 cli_runner.go:164] Run: docker volume create bridge-227717 --label name.minikube.sigs.k8s.io=bridge-227717 --label created_by.minikube.sigs.k8s.io=true
	I0926 23:22:08.618045  539238 oci.go:103] Successfully created a docker volume bridge-227717
	I0926 23:22:08.618135  539238 cli_runner.go:164] Run: docker run --rm --name bridge-227717-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-227717 --entrypoint /usr/bin/test -v bridge-227717:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0926 23:22:08.988380  539238 oci.go:107] Successfully prepared a docker volume bridge-227717
	I0926 23:22:08.988423  539238 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:22:08.988444  539238 kic.go:194] Starting extracting preloaded images to volume ...
	I0926 23:22:08.988505  539238 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v bridge-227717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0926 23:22:13.240265  539238 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v bridge-227717:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.251709332s)
	I0926 23:22:13.240299  539238 kic.go:203] duration metric: took 4.251851632s to extract preloaded images to volume ...
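
The preload step above extracts the lz4 tarball into the bridge-227717 docker volume by mounting both into a throwaway container whose entrypoint is tar. The same invocation driven from Go; the tarball path here is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Mount the preload tarball (read-only) and the target volume into a
	// disposable container and untar, as the "docker run --rm
	// --entrypoint /usr/bin/tar" log line above does.
	tarball := "/path/to/preloaded-images.tar.lz4" // illustrative path
	volume := "bridge-227717"
	image := "gcr.io/k8s-minikube/kicbase:v0.0.48"

	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}
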
	W0926 23:22:13.240391  539238 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0926 23:22:13.240425  539238 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0926 23:22:13.240500  539238 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0926 23:22:11.209647  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:11.209695  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:11.209704  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:11.209715  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:11.209722  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:11.209732  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:11.209738  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:11.209746  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:11.209765  531066 retry.go:31] will retry after 2.403673484s: missing components: kube-dns
	I0926 23:22:13.620151  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:13.620193  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:13.620202  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:13.620211  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:13.620217  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:13.620226  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:13.620232  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:13.620237  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:13.620256  531066 retry.go:31] will retry after 2.413412869s: missing components: kube-dns
	I0926 23:22:13.294455  539238 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-227717 --name bridge-227717 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-227717 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-227717 --network bridge-227717 --ip 192.168.76.2 --volume bridge-227717:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0926 23:22:13.567341  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Running}}
	I0926 23:22:13.585500  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:13.604156  539238 cli_runner.go:164] Run: docker exec bridge-227717 stat /var/lib/dpkg/alternatives/iptables
	I0926 23:22:13.650955  539238 oci.go:144] the created container "bridge-227717" has a running status.
	I0926 23:22:13.650986  539238 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa...
	I0926 23:22:13.741225  539238 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0926 23:22:13.768466  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:13.789477  539238 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0926 23:22:13.789506  539238 kic_runner.go:114] Args: [docker exec --privileged bridge-227717 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0926 23:22:13.845920  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:13.867558  539238 machine.go:93] provisionDockerMachine start ...
	I0926 23:22:13.867669  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:13.889876  539238 main.go:141] libmachine: Using SSH client type: native
	I0926 23:22:13.890267  539238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0926 23:22:13.890291  539238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 23:22:14.033514  539238 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-227717
	
	I0926 23:22:14.033546  539238 ubuntu.go:182] provisioning hostname "bridge-227717"
	I0926 23:22:14.033615  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:14.054267  539238 main.go:141] libmachine: Using SSH client type: native
	I0926 23:22:14.054527  539238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0926 23:22:14.054544  539238 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-227717 && echo "bridge-227717" | sudo tee /etc/hostname
	I0926 23:22:14.206907  539238 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-227717
	
	I0926 23:22:14.207004  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:14.227235  539238 main.go:141] libmachine: Using SSH client type: native
	I0926 23:22:14.227550  539238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0926 23:22:14.227580  539238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-227717' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-227717/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-227717' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 23:22:14.365067  539238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 23:22:14.365109  539238 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21642-208519/.minikube CaCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21642-208519/.minikube}
	I0926 23:22:14.365166  539238 ubuntu.go:190] setting up certificates
	I0926 23:22:14.365184  539238 provision.go:84] configureAuth start
	I0926 23:22:14.365237  539238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-227717
	I0926 23:22:14.383845  539238 provision.go:143] copyHostCerts
	I0926 23:22:14.383915  539238 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem, removing ...
	I0926 23:22:14.383931  539238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem
	I0926 23:22:14.384004  539238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/key.pem (1675 bytes)
	I0926 23:22:14.384156  539238 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem, removing ...
	I0926 23:22:14.384171  539238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem
	I0926 23:22:14.384215  539238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/ca.pem (1078 bytes)
	I0926 23:22:14.384328  539238 exec_runner.go:144] found /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem, removing ...
	I0926 23:22:14.384341  539238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem
	I0926 23:22:14.384382  539238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21642-208519/.minikube/cert.pem (1123 bytes)
	I0926 23:22:14.384477  539238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem org=jenkins.bridge-227717 san=[127.0.0.1 192.168.76.2 bridge-227717 localhost minikube]
	I0926 23:22:14.555752  539238 provision.go:177] copyRemoteCerts
	I0926 23:22:14.555816  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 23:22:14.555853  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:14.574627  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:14.673152  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 23:22:14.701442  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 23:22:14.726795  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 23:22:14.752197  539238 provision.go:87] duration metric: took 386.996004ms to configureAuth
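
configureAuth (provision.go:117 above) generates a server certificate whose SANs are the logged list: 127.0.0.1 192.168.76.2 bridge-227717 localhost minikube. A simplified, self-signed stand-in built with crypto/x509; the real flow signs with the minikube CA key rather than self-signing:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-227717"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list as logged by provision.go:117
		DNSNames:    []string{"bridge-227717", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed: template doubles as parent; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
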
	I0926 23:22:14.752227  539238 ubuntu.go:206] setting minikube options for container-runtime
	I0926 23:22:14.752419  539238 config.go:182] Loaded profile config "bridge-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:14.752542  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:14.770615  539238 main.go:141] libmachine: Using SSH client type: native
	I0926 23:22:14.770891  539238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0926 23:22:14.770915  539238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0926 23:22:15.020349  539238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0926 23:22:15.020380  539238 machine.go:96] duration metric: took 1.152796056s to provisionDockerMachine
	I0926 23:22:15.020396  539238 client.go:171] duration metric: took 6.550890038s to LocalClient.Create
	I0926 23:22:15.020418  539238 start.go:167] duration metric: took 6.550944995s to libmachine.API.Create "bridge-227717"
	I0926 23:22:15.020427  539238 start.go:293] postStartSetup for "bridge-227717" (driver="docker")
	I0926 23:22:15.020442  539238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 23:22:15.020513  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 23:22:15.020558  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:15.038720  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:15.139176  539238 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 23:22:15.142726  539238 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0926 23:22:15.142764  539238 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0926 23:22:15.142777  539238 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0926 23:22:15.142786  539238 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0926 23:22:15.142798  539238 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-208519/.minikube/addons for local assets ...
	I0926 23:22:15.142856  539238 filesync.go:126] Scanning /home/jenkins/minikube-integration/21642-208519/.minikube/files for local assets ...
	I0926 23:22:15.142958  539238 filesync.go:149] local asset: /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem -> 2121372.pem in /etc/ssl/certs
	I0926 23:22:15.143056  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 23:22:15.152523  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem --> /etc/ssl/certs/2121372.pem (1708 bytes)
	I0926 23:22:15.181057  539238 start.go:296] duration metric: took 160.602622ms for postStartSetup
	I0926 23:22:15.181419  539238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-227717
	I0926 23:22:15.200373  539238 profile.go:143] Saving config to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/config.json ...
	I0926 23:22:15.200595  539238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:22:15.200647  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:15.221129  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:15.319393  539238 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0926 23:22:15.324724  539238 start.go:128] duration metric: took 6.857145072s to createHost
	I0926 23:22:15.324751  539238 start.go:83] releasing machines lock for "bridge-227717", held for 6.857318622s
	I0926 23:22:15.324833  539238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-227717
	I0926 23:22:15.344474  539238 ssh_runner.go:195] Run: cat /version.json
	I0926 23:22:15.344523  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:15.344587  539238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 23:22:15.344658  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:15.364232  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:15.364724  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:15.456440  539238 ssh_runner.go:195] Run: systemctl --version
	I0926 23:22:15.530549  539238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0926 23:22:15.674857  539238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0926 23:22:15.679887  539238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:22:15.703285  539238 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0926 23:22:15.703355  539238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 23:22:15.732546  539238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0926 23:22:15.732568  539238 start.go:495] detecting cgroup driver to use...
	I0926 23:22:15.732598  539238 detect.go:190] detected "systemd" cgroup driver on host os
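
detect.go:190 above reports the host's cgroup driver so CRI-O can be configured to match. A common heuristic, shown here as an assumption rather than minikube's exact check, is that a host booted with systemd exposes the /run/systemd/system directory:

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver prefers "systemd" when the host was booted with
// systemd (signalled by the /run/systemd/system directory), otherwise
// falls back to "cgroupfs". A heuristic sketch, not minikube's code.
func detectCgroupDriver() string {
	if fi, err := os.Stat("/run/systemd/system"); err == nil && fi.IsDir() {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println("detected cgroup driver:", detectCgroupDriver())
}
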
	I0926 23:22:15.732641  539238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 23:22:15.748391  539238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 23:22:15.760275  539238 docker.go:218] disabling cri-docker service (if available) ...
	I0926 23:22:15.760346  539238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0926 23:22:15.774861  539238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0926 23:22:15.790472  539238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0926 23:22:15.863052  539238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0926 23:22:15.936335  539238 docker.go:234] disabling docker service ...
	I0926 23:22:15.936392  539238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0926 23:22:15.955556  539238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0926 23:22:15.968730  539238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0926 23:22:16.035211  539238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0926 23:22:16.209853  539238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0926 23:22:16.222129  539238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 23:22:16.240570  539238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0926 23:22:16.240639  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.253932  539238 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0926 23:22:16.254025  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.265787  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.279948  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.292673  539238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 23:22:16.303007  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.313783  539238 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0926 23:22:16.331734  539238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
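
The crio.go steps above patch /etc/crio/crio.conf.d/02-crio.conf with sed: swap the pause image, set cgroup_manager to "systemd", and re-add conmon_cgroup. The same edits expressed as Go regexp replacements over an in-memory copy of the file (a sketch; the sample input is made up):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte("pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n")
	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Set the cgroup manager and append conmon_cgroup after it, as the
	// paired sed delete/insert commands above do.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "systemd"`+"\n"+`conmon_cgroup = "pod"`))
	fmt.Printf("%s", conf)
}
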
	I0926 23:22:16.342483  539238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 23:22:16.351439  539238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 23:22:16.360720  539238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:22:16.428173  539238 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0926 23:22:16.524622  539238 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0926 23:22:16.524681  539238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0926 23:22:16.528749  539238 start.go:563] Will wait 60s for crictl version
	I0926 23:22:16.528810  539238 ssh_runner.go:195] Run: which crictl
	I0926 23:22:16.532388  539238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 23:22:16.568641  539238 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0926 23:22:16.568735  539238 ssh_runner.go:195] Run: crio --version
	I0926 23:22:16.606806  539238 ssh_runner.go:195] Run: crio --version
	I0926 23:22:16.644351  539238 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	W0926 23:22:13.565297  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:16.068512  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:16.645429  539238 cli_runner.go:164] Run: docker network inspect bridge-227717 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0926 23:22:16.662944  539238 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0926 23:22:16.667413  539238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:22:16.679295  539238 kubeadm.go:883] updating cluster {Name:bridge-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 23:22:16.679415  539238 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0926 23:22:16.679466  539238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:22:16.752288  539238 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:22:16.752310  539238 crio.go:433] Images already preloaded, skipping extraction
	I0926 23:22:16.752368  539238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0926 23:22:16.788381  539238 crio.go:514] all images are preloaded for cri-o runtime.
	I0926 23:22:16.788407  539238 cache_images.go:85] Images are preloaded, skipping loading
	I0926 23:22:16.788420  539238 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.0 crio true true} ...
	I0926 23:22:16.788527  539238 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=bridge-227717 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0926 23:22:16.788603  539238 ssh_runner.go:195] Run: crio config
	I0926 23:22:16.832420  539238 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:22:16.832452  539238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 23:22:16.832473  539238 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-227717 NodeName:bridge-227717 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 23:22:16.832611  539238 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-227717"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 23:22:16.832671  539238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0926 23:22:16.842608  539238 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 23:22:16.842676  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 23:22:16.852376  539238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0926 23:22:16.872244  539238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 23:22:16.894296  539238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
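
The 2209-byte kubeadm.yaml.new written above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents. A quick structural sanity check of the kubelet document using gopkg.in/yaml.v3; the struct is illustrative and mirrors only the fields visible in the log:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig captures the fields of the KubeletConfiguration
// document above that the cluster depends on; an illustrative subset.
type kubeletConfig struct {
	APIVersion               string `yaml:"apiVersion"`
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
	doc := `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
`
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		panic(err)
	}
	fmt.Printf("%s/%s cgroupDriver=%s endpoint=%s failSwapOn=%v\n",
		kc.APIVersion, kc.Kind, kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.FailSwapOn)
}
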
	I0926 23:22:16.914270  539238 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0926 23:22:16.918747  539238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 23:22:16.930823  539238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:22:16.995253  539238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:22:17.022712  539238 certs.go:69] Setting up /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717 for IP: 192.168.76.2
	I0926 23:22:17.022736  539238 certs.go:195] generating shared ca certs ...
	I0926 23:22:17.022767  539238 certs.go:227] acquiring lock for ca certs: {Name:mk7fa2bdff33a744d301294affc1d74bea26e4b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.022928  539238 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key
	I0926 23:22:17.022979  539238 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key
	I0926 23:22:17.022992  539238 certs.go:257] generating profile certs ...
	I0926 23:22:17.023065  539238 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.key
	I0926 23:22:17.023094  539238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt with IP's: []
	I0926 23:22:17.257181  539238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt ...
	I0926 23:22:17.257211  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.crt: {Name:mk12ab5b701ec110fb8601a9bc3d04dbaa831776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.257430  539238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.key ...
	I0926 23:22:17.257446  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/client.key: {Name:mka3f19ba6abd5a9770583f4d38a136a49d6e03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.257565  539238 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key.f0da0de2
	I0926 23:22:17.257589  539238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt.f0da0de2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0926 23:22:17.460005  539238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt.f0da0de2 ...
	I0926 23:22:17.460034  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt.f0da0de2: {Name:mkadaa82cedee7fb0a867007c7de1d4d52a6f9f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.460246  539238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key.f0da0de2 ...
	I0926 23:22:17.460265  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key.f0da0de2: {Name:mk8641615c464c39cf0cbf1ceef0f2f47c5b6794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.460374  539238 certs.go:382] copying /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt.f0da0de2 -> /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt
	I0926 23:22:17.460478  539238 certs.go:386] copying /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key.f0da0de2 -> /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key
	I0926 23:22:17.460561  539238 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.key
	I0926 23:22:17.460581  539238 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.crt with IP's: []
	I0926 23:22:17.579117  539238 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.crt ...
	I0926 23:22:17.579146  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.crt: {Name:mk10d8d68dbb6b61b5b15fc73c8649e99c3edba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.579316  539238 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.key ...
	I0926 23:22:17.579329  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.key: {Name:mk67bae5a996719861df916e29855b00ad52ef70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:17.579503  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/212137.pem (1338 bytes)
	W0926 23:22:17.579560  539238 certs.go:480] ignoring /home/jenkins/minikube-integration/21642-208519/.minikube/certs/212137_empty.pem, impossibly tiny 0 bytes
	I0926 23:22:17.579575  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 23:22:17.579601  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/ca.pem (1078 bytes)
	I0926 23:22:17.579626  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/cert.pem (1123 bytes)
	I0926 23:22:17.579660  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/certs/key.pem (1675 bytes)
	I0926 23:22:17.579715  539238 certs.go:484] found cert: /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem (1708 bytes)
	I0926 23:22:17.580345  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 23:22:17.609625  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 23:22:17.636704  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 23:22:17.662622  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0926 23:22:17.689427  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 23:22:17.715876  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 23:22:17.743213  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 23:22:17.770497  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/bridge-227717/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 23:22:17.796873  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/ssl/certs/2121372.pem --> /usr/share/ca-certificates/2121372.pem (1708 bytes)
	I0926 23:22:17.826015  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 23:22:17.851210  539238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21642-208519/.minikube/certs/212137.pem --> /usr/share/ca-certificates/212137.pem (1338 bytes)
	I0926 23:22:17.875975  539238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 23:22:17.894617  539238 ssh_runner.go:195] Run: openssl version
	I0926 23:22:17.901080  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2121372.pem && ln -fs /usr/share/ca-certificates/2121372.pem /etc/ssl/certs/2121372.pem"
	I0926 23:22:17.911627  539238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2121372.pem
	I0926 23:22:17.915581  539238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 26 22:36 /usr/share/ca-certificates/2121372.pem
	I0926 23:22:17.915645  539238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2121372.pem
	I0926 23:22:17.923209  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2121372.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 23:22:17.933073  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 23:22:17.943011  539238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:22:17.947100  539238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 26 22:29 /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:22:17.947161  539238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 23:22:17.954654  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 23:22:17.964786  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/212137.pem && ln -fs /usr/share/ca-certificates/212137.pem /etc/ssl/certs/212137.pem"
	I0926 23:22:17.976321  539238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/212137.pem
	I0926 23:22:17.980796  539238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 26 22:36 /usr/share/ca-certificates/212137.pem
	I0926 23:22:17.980870  539238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/212137.pem
	I0926 23:22:17.988146  539238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/212137.pem /etc/ssl/certs/51391683.0"
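
	The openssl/ln sequence above is the standard OpenSSL hashed-symlink convention: CA lookup under /etc/ssl/certs is done by subject-hash filename, so each PEM gets a <hash>.0 symlink. A minimal sketch of the same step, using the minikubeCA.pem path from the log:

	    # compute the subject hash, then link <hash>.0 to the cert
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run
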
	I0926 23:22:17.998575  539238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 23:22:18.002077  539238 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 23:22:18.002161  539238 kubeadm.go:400] StartCluster: {Name:bridge-227717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-227717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 23:22:18.002245  539238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0926 23:22:18.002309  539238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0926 23:22:18.039056  539238 cri.go:89] found id: ""
	I0926 23:22:18.039141  539238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 23:22:18.048717  539238 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 23:22:18.058247  539238 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0926 23:22:18.058305  539238 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 23:22:18.068379  539238 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 23:22:18.068400  539238 kubeadm.go:157] found existing configuration files:
	
	I0926 23:22:18.068443  539238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 23:22:18.077807  539238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 23:22:18.077878  539238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 23:22:18.087391  539238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 23:22:18.096651  539238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 23:22:18.096708  539238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 23:22:18.106651  539238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 23:22:18.116406  539238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 23:22:18.116495  539238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 23:22:18.125668  539238 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 23:22:18.134938  539238 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 23:22:18.135004  539238 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
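
	The grep/rm pairs above implement a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A condensed sketch of the same loop, with file names taken from the log:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"   # drop configs pointing elsewhere (or missing)
	    done
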
	I0926 23:22:18.143877  539238 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0926 23:22:18.200556  539238 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0926 23:22:18.259571  539238 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
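
	The Service-Kubelet warning is benign here, since the log shows minikube starting the kubelet itself later (systemctl start kubelet at 23:22:32); the fix kubeadm suggests, run by hand, would simply be:

	    sudo systemctl enable kubelet.service
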
	I0926 23:22:16.038519  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:16.038550  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:16.038557  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:16.038564  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:16.038568  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:16.038572  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:16.038576  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:16.038579  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:16.038594  531066 retry.go:31] will retry after 4.392682378s: missing components: kube-dns
	I0926 23:22:20.436534  531066 system_pods.go:86] 7 kube-system pods found
	I0926 23:22:20.436571  531066 system_pods.go:89] "coredns-66bc5c9577-72ld9" [85e13695-188f-4f44-a5c4-6bce9f99c7e8] Running
	I0926 23:22:20.436580  531066 system_pods.go:89] "etcd-flannel-227717" [3e96285c-a07e-43f2-8830-6eb0a1cee44b] Running
	I0926 23:22:20.436586  531066 system_pods.go:89] "kube-apiserver-flannel-227717" [f8238904-4a3d-4074-aade-0c7897ff766f] Running
	I0926 23:22:20.436591  531066 system_pods.go:89] "kube-controller-manager-flannel-227717" [30e2c364-b847-48d4-a3c4-205af85cadbd] Running
	I0926 23:22:20.436596  531066 system_pods.go:89] "kube-proxy-94chj" [daf7349b-f564-44d5-a975-71afc5121328] Running
	I0926 23:22:20.436602  531066 system_pods.go:89] "kube-scheduler-flannel-227717" [acd16790-6d30-4d2c-9ac1-2213b7e4bbf0] Running
	I0926 23:22:20.436606  531066 system_pods.go:89] "storage-provisioner" [d6a5cbce-7387-48e7-9cae-7f1e0f1f7ea3] Running
	I0926 23:22:20.436618  531066 system_pods.go:126] duration metric: took 17.107747385s to wait for k8s-apps to be running ...
	I0926 23:22:20.436633  531066 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:22:20.436690  531066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:22:20.449823  531066 system_svc.go:56] duration metric: took 13.177203ms WaitForService to wait for kubelet
	I0926 23:22:20.449863  531066 kubeadm.go:586] duration metric: took 20.96652036s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:22:20.449889  531066 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:22:20.452916  531066 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 23:22:20.452942  531066 node_conditions.go:123] node cpu capacity is 8
	I0926 23:22:20.452959  531066 node_conditions.go:105] duration metric: took 3.064689ms to run NodePressure ...
	I0926 23:22:20.452974  531066 start.go:241] waiting for startup goroutines ...
	I0926 23:22:20.452983  531066 start.go:246] waiting for cluster config update ...
	I0926 23:22:20.453000  531066 start.go:255] writing updated cluster config ...
	I0926 23:22:20.453333  531066 ssh_runner.go:195] Run: rm -f paused
	I0926 23:22:20.457376  531066 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:22:20.460804  531066 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-72ld9" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.465281  531066 pod_ready.go:94] pod "coredns-66bc5c9577-72ld9" is "Ready"
	I0926 23:22:20.465301  531066 pod_ready.go:86] duration metric: took 4.478615ms for pod "coredns-66bc5c9577-72ld9" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.467228  531066 pod_ready.go:83] waiting for pod "etcd-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.470846  531066 pod_ready.go:94] pod "etcd-flannel-227717" is "Ready"
	I0926 23:22:20.470863  531066 pod_ready.go:86] duration metric: took 3.614994ms for pod "etcd-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.474997  531066 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.478810  531066 pod_ready.go:94] pod "kube-apiserver-flannel-227717" is "Ready"
	I0926 23:22:20.478834  531066 pod_ready.go:86] duration metric: took 3.815303ms for pod "kube-apiserver-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.480773  531066 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:20.862171  531066 pod_ready.go:94] pod "kube-controller-manager-flannel-227717" is "Ready"
	I0926 23:22:20.862198  531066 pod_ready.go:86] duration metric: took 381.405612ms for pod "kube-controller-manager-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:21.061277  531066 pod_ready.go:83] waiting for pod "kube-proxy-94chj" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:21.461664  531066 pod_ready.go:94] pod "kube-proxy-94chj" is "Ready"
	I0926 23:22:21.461693  531066 pod_ready.go:86] duration metric: took 400.390129ms for pod "kube-proxy-94chj" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:21.662293  531066 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:22.061295  531066 pod_ready.go:94] pod "kube-scheduler-flannel-227717" is "Ready"
	I0926 23:22:22.061322  531066 pod_ready.go:86] duration metric: took 399.003596ms for pod "kube-scheduler-flannel-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:22.061333  531066 pod_ready.go:40] duration metric: took 1.603920934s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:22:22.107179  531066 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:22:22.109596  531066 out.go:179] * Done! kubectl is now configured to use "flannel-227717" cluster and "default" namespace by default
	W0926 23:22:18.565528  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:21.064422  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:27.152347  539238 kubeadm.go:318] [init] Using Kubernetes version: v1.34.0
	I0926 23:22:27.152424  539238 kubeadm.go:318] [preflight] Running pre-flight checks
	I0926 23:22:27.152531  539238 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I0926 23:22:27.152592  539238 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1040-gcp
	I0926 23:22:27.152626  539238 kubeadm.go:318] OS: Linux
	I0926 23:22:27.152666  539238 kubeadm.go:318] CGROUPS_CPU: enabled
	I0926 23:22:27.152735  539238 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I0926 23:22:27.152791  539238 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I0926 23:22:27.152838  539238 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I0926 23:22:27.152879  539238 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I0926 23:22:27.152927  539238 kubeadm.go:318] CGROUPS_PIDS: enabled
	I0926 23:22:27.152968  539238 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I0926 23:22:27.153015  539238 kubeadm.go:318] CGROUPS_IO: enabled
	I0926 23:22:27.153081  539238 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 23:22:27.153218  539238 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 23:22:27.153311  539238 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 23:22:27.153371  539238 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 23:22:27.156159  539238 out.go:252]   - Generating certificates and keys ...
	I0926 23:22:27.156233  539238 kubeadm.go:318] [certs] Using existing ca certificate authority
	I0926 23:22:27.156297  539238 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I0926 23:22:27.156358  539238 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 23:22:27.156422  539238 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I0926 23:22:27.156507  539238 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I0926 23:22:27.156557  539238 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I0926 23:22:27.156604  539238 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I0926 23:22:27.156733  539238 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [bridge-227717 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0926 23:22:27.156821  539238 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I0926 23:22:27.156943  539238 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [bridge-227717 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0926 23:22:27.157030  539238 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 23:22:27.157134  539238 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 23:22:27.157202  539238 kubeadm.go:318] [certs] Generating "sa" key and public key
	I0926 23:22:27.157268  539238 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 23:22:27.157315  539238 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 23:22:27.157395  539238 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 23:22:27.157459  539238 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 23:22:27.157535  539238 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 23:22:27.157620  539238 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 23:22:27.157738  539238 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 23:22:27.157847  539238 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 23:22:27.159212  539238 out.go:252]   - Booting up control plane ...
	I0926 23:22:27.159292  539238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 23:22:27.159395  539238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 23:22:27.159502  539238 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 23:22:27.159618  539238 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 23:22:27.159713  539238 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0926 23:22:27.159830  539238 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0926 23:22:27.159912  539238 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 23:22:27.159971  539238 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I0926 23:22:27.160158  539238 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 23:22:27.160284  539238 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 23:22:27.160381  539238 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000861015s
	I0926 23:22:27.160528  539238 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0926 23:22:27.160668  539238 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0926 23:22:27.160782  539238 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0926 23:22:27.160888  539238 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0926 23:22:27.160976  539238 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 919.813595ms
	I0926 23:22:27.161069  539238 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.894189857s
	I0926 23:22:27.161197  539238 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501208699s
	I0926 23:22:27.161362  539238 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 23:22:27.161530  539238 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 23:22:27.161590  539238 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 23:22:27.161806  539238 kubeadm.go:318] [mark-control-plane] Marking the node bridge-227717 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 23:22:27.161882  539238 kubeadm.go:318] [bootstrap-token] Using token: r5cn1d.nfbdwo5sx0g5pe6j
	I0926 23:22:27.163314  539238 out.go:252]   - Configuring RBAC rules ...
	I0926 23:22:27.163437  539238 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 23:22:27.163543  539238 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 23:22:27.163671  539238 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 23:22:27.163817  539238 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 23:22:27.163915  539238 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 23:22:27.163998  539238 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 23:22:27.164139  539238 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 23:22:27.164197  539238 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I0926 23:22:27.164236  539238 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I0926 23:22:27.164245  539238 kubeadm.go:318] 
	I0926 23:22:27.164309  539238 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I0926 23:22:27.164321  539238 kubeadm.go:318] 
	I0926 23:22:27.164430  539238 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I0926 23:22:27.164442  539238 kubeadm.go:318] 
	I0926 23:22:27.164475  539238 kubeadm.go:318]   mkdir -p $HOME/.kube
	I0926 23:22:27.164565  539238 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 23:22:27.164637  539238 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 23:22:27.164646  539238 kubeadm.go:318] 
	I0926 23:22:27.164727  539238 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I0926 23:22:27.164737  539238 kubeadm.go:318] 
	I0926 23:22:27.164812  539238 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 23:22:27.164822  539238 kubeadm.go:318] 
	I0926 23:22:27.164897  539238 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I0926 23:22:27.164984  539238 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 23:22:27.165046  539238 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 23:22:27.165053  539238 kubeadm.go:318] 
	I0926 23:22:27.165155  539238 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 23:22:27.165233  539238 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I0926 23:22:27.165240  539238 kubeadm.go:318] 
	I0926 23:22:27.165304  539238 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token r5cn1d.nfbdwo5sx0g5pe6j \
	I0926 23:22:27.165404  539238 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3d6fdc47b9a05c23d4386afe181df704d515c8f0f79bc04c09ac3ae58668e55b \
	I0926 23:22:27.165431  539238 kubeadm.go:318] 	--control-plane 
	I0926 23:22:27.165441  539238 kubeadm.go:318] 
	I0926 23:22:27.165517  539238 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I0926 23:22:27.165523  539238 kubeadm.go:318] 
	I0926 23:22:27.165596  539238 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token r5cn1d.nfbdwo5sx0g5pe6j \
	I0926 23:22:27.165714  539238 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3d6fdc47b9a05c23d4386afe181df704d515c8f0f79bc04c09ac3ae58668e55b 
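
	The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key; it can be recomputed on the control plane with the standard kubeadm recipe. The CA path below follows the [certs] line earlier in this log (certificateDir /var/lib/minikube/certs):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
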
	I0926 23:22:27.165729  539238 cni.go:84] Creating CNI manager for "bridge"
	I0926 23:22:27.167835  539238 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W0926 23:22:23.065631  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:25.565212  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:27.169116  539238 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 23:22:27.179575  539238 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
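
	The 496-byte conflist copied above is the bridge CNI configuration; the log records only the copy, not its contents. A representative /etc/cni/net.d/1-k8s.conflist of this kind (contents assumed, not taken from this run) could be written as:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
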
	I0926 23:22:27.200877  539238 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 23:22:27.200951  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:27.200981  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-227717 minikube.k8s.io/updated_at=2025_09_26T23_22_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47 minikube.k8s.io/name=bridge-227717 minikube.k8s.io/primary=true
	I0926 23:22:27.277181  539238 ops.go:34] apiserver oom_adj: -16
	I0926 23:22:27.277240  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:27.777903  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:28.277680  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:28.778136  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:29.277671  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:29.777905  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:30.278283  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:30.778314  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:31.277691  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:31.777527  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:32.278280  539238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 23:22:32.365300  539238 kubeadm.go:1113] duration metric: took 5.164412944s to wait for elevateKubeSystemPrivileges
	I0926 23:22:32.365343  539238 kubeadm.go:402] duration metric: took 14.36318598s to StartCluster
	I0926 23:22:32.365366  539238 settings.go:142] acquiring lock: {Name:mk916931486ea7be0f55a69a0dcc9388c8f91bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:32.365454  539238 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 23:22:32.366919  539238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21642-208519/kubeconfig: {Name:mk573e8783a83da2d326620e120d75cc729311d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 23:22:32.367231  539238 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0926 23:22:32.367244  539238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 23:22:32.367320  539238 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 23:22:32.367411  539238 addons.go:69] Setting storage-provisioner=true in profile "bridge-227717"
	I0926 23:22:32.367437  539238 addons.go:238] Setting addon storage-provisioner=true in "bridge-227717"
	I0926 23:22:32.367445  539238 config.go:182] Loaded profile config "bridge-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:22:32.367475  539238 host.go:66] Checking if "bridge-227717" exists ...
	I0926 23:22:32.367462  539238 addons.go:69] Setting default-storageclass=true in profile "bridge-227717"
	I0926 23:22:32.367524  539238 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-227717"
	I0926 23:22:32.367895  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:32.368081  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:32.372588  539238 out.go:179] * Verifying Kubernetes components...
	I0926 23:22:32.374180  539238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 23:22:32.391326  539238 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0926 23:22:28.065074  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:30.065808  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:32.392164  539238 addons.go:238] Setting addon default-storageclass=true in "bridge-227717"
	I0926 23:22:32.392215  539238 host.go:66] Checking if "bridge-227717" exists ...
	I0926 23:22:32.392651  539238 cli_runner.go:164] Run: docker container inspect bridge-227717 --format={{.State.Status}}
	I0926 23:22:32.392890  539238 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:22:32.392919  539238 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 23:22:32.392974  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:32.418989  539238 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 23:22:32.419012  539238 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 23:22:32.419219  539238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-227717
	I0926 23:22:32.422169  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:32.449248  539238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/bridge-227717/id_rsa Username:docker}
	I0926 23:22:32.466393  539238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0926 23:22:32.513826  539238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 23:22:32.543654  539238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 23:22:32.568106  539238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 23:22:32.673867  539238 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
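
	The sed pipeline at 23:22:32.466 rewrites the coredns ConfigMap in place; rendered, the fragment it injects into the Corefile (host IP from the log) is:

	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }

	plus a `log` directive inserted ahead of `errors`.
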
	I0926 23:22:32.675402  539238 node_ready.go:35] waiting up to 15m0s for node "bridge-227717" to be "Ready" ...
	I0926 23:22:32.688235  539238 node_ready.go:49] node "bridge-227717" is "Ready"
	I0926 23:22:32.688273  539238 node_ready.go:38] duration metric: took 12.8394ms for node "bridge-227717" to be "Ready" ...
	I0926 23:22:32.688293  539238 api_server.go:52] waiting for apiserver process to appear ...
	I0926 23:22:32.688346  539238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:22:32.915472  539238 api_server.go:72] duration metric: took 548.201756ms to wait for apiserver process to appear ...
	I0926 23:22:32.915499  539238 api_server.go:88] waiting for apiserver healthz status ...
	I0926 23:22:32.915521  539238 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0926 23:22:32.921705  539238 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
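
	The same healthz probe can be reproduced by hand against the endpoint in the log (-k because the apiserver cert chains to the minikube CA, not the host trust store):

	    curl -k https://192.168.76.2:8443/healthz    # prints: ok
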
	I0926 23:22:32.922848  539238 api_server.go:141] control plane version: v1.34.0
	I0926 23:22:32.922949  539238 api_server.go:131] duration metric: took 7.442211ms to wait for apiserver health ...
	I0926 23:22:32.922958  539238 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 23:22:32.924727  539238 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0926 23:22:32.926164  539238 addons.go:514] duration metric: took 558.848041ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0926 23:22:32.926599  539238 system_pods.go:59] 8 kube-system pods found
	I0926 23:22:32.926632  539238 system_pods.go:61] "coredns-66bc5c9577-49j55" [a4bd29c6-9d53-4d6a-9f15-bc7e8f89a0d6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:32.926642  539238 system_pods.go:61] "coredns-66bc5c9577-f2bz7" [e36e4f7b-8b76-4fc9-bda4-e2aa91f8434e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:32.926653  539238 system_pods.go:61] "etcd-bridge-227717" [c90194d2-92ea-4118-bb34-59a0735e245f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:32.926665  539238 system_pods.go:61] "kube-apiserver-bridge-227717" [82eddf9e-11c5-4cf4-860d-e74122443d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:22:32.926679  539238 system_pods.go:61] "kube-controller-manager-bridge-227717" [f55d8abf-5fb0-4cfc-806e-d6dfde578806] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:22:32.926689  539238 system_pods.go:61] "kube-proxy-47cgp" [b6a43704-cb57-4490-bfce-aeda1c269a71] Running
	I0926 23:22:32.926697  539238 system_pods.go:61] "kube-scheduler-bridge-227717" [c9b4fe5f-64e3-4cbb-b2f5-55d1dce2afae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:22:32.926706  539238 system_pods.go:61] "storage-provisioner" [c5e9f540-b456-4ade-ad9f-32f2a5282c3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:22:32.926717  539238 system_pods.go:74] duration metric: took 3.751675ms to wait for pod list to return data ...
	I0926 23:22:32.926731  539238 default_sa.go:34] waiting for default service account to be created ...
	I0926 23:22:32.929026  539238 default_sa.go:45] found service account: "default"
	I0926 23:22:32.929048  539238 default_sa.go:55] duration metric: took 2.308615ms for default service account to be created ...
	I0926 23:22:32.929058  539238 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 23:22:32.931675  539238 system_pods.go:86] 8 kube-system pods found
	I0926 23:22:32.931719  539238 system_pods.go:89] "coredns-66bc5c9577-49j55" [a4bd29c6-9d53-4d6a-9f15-bc7e8f89a0d6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:32.931733  539238 system_pods.go:89] "coredns-66bc5c9577-f2bz7" [e36e4f7b-8b76-4fc9-bda4-e2aa91f8434e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:32.931739  539238 system_pods.go:89] "etcd-bridge-227717" [c90194d2-92ea-4118-bb34-59a0735e245f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:32.931744  539238 system_pods.go:89] "kube-apiserver-bridge-227717" [82eddf9e-11c5-4cf4-860d-e74122443d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:22:32.931755  539238 system_pods.go:89] "kube-controller-manager-bridge-227717" [f55d8abf-5fb0-4cfc-806e-d6dfde578806] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:22:32.931761  539238 system_pods.go:89] "kube-proxy-47cgp" [b6a43704-cb57-4490-bfce-aeda1c269a71] Running
	I0926 23:22:32.931766  539238 system_pods.go:89] "kube-scheduler-bridge-227717" [c9b4fe5f-64e3-4cbb-b2f5-55d1dce2afae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:22:32.931773  539238 system_pods.go:89] "storage-provisioner" [c5e9f540-b456-4ade-ad9f-32f2a5282c3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:22:32.931792  539238 retry.go:31] will retry after 226.559612ms: missing components: kube-dns
	I0926 23:22:33.162971  539238 system_pods.go:86] 8 kube-system pods found
	I0926 23:22:33.163010  539238 system_pods.go:89] "coredns-66bc5c9577-49j55" [a4bd29c6-9d53-4d6a-9f15-bc7e8f89a0d6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:33.163026  539238 system_pods.go:89] "coredns-66bc5c9577-f2bz7" [e36e4f7b-8b76-4fc9-bda4-e2aa91f8434e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:33.163035  539238 system_pods.go:89] "etcd-bridge-227717" [c90194d2-92ea-4118-bb34-59a0735e245f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:33.163043  539238 system_pods.go:89] "kube-apiserver-bridge-227717" [82eddf9e-11c5-4cf4-860d-e74122443d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:22:33.163051  539238 system_pods.go:89] "kube-controller-manager-bridge-227717" [f55d8abf-5fb0-4cfc-806e-d6dfde578806] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:22:33.163065  539238 system_pods.go:89] "kube-proxy-47cgp" [b6a43704-cb57-4490-bfce-aeda1c269a71] Running
	I0926 23:22:33.163075  539238 system_pods.go:89] "kube-scheduler-bridge-227717" [c9b4fe5f-64e3-4cbb-b2f5-55d1dce2afae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:22:33.163111  539238 system_pods.go:89] "storage-provisioner" [c5e9f540-b456-4ade-ad9f-32f2a5282c3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0926 23:22:33.163138  539238 retry.go:31] will retry after 350.388001ms: missing components: kube-dns
	I0926 23:22:33.178947  539238 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-227717" context rescaled to 1 replicas
	W0926 23:22:32.565611  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:34.565720  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:37.065401  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:22:33.518595  539238 system_pods.go:86] 8 kube-system pods found
	I0926 23:22:33.518629  539238 system_pods.go:89] "coredns-66bc5c9577-49j55" [a4bd29c6-9d53-4d6a-9f15-bc7e8f89a0d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:33.518638  539238 system_pods.go:89] "coredns-66bc5c9577-f2bz7" [e36e4f7b-8b76-4fc9-bda4-e2aa91f8434e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0926 23:22:33.518651  539238 system_pods.go:89] "etcd-bridge-227717" [c90194d2-92ea-4118-bb34-59a0735e245f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0926 23:22:33.518657  539238 system_pods.go:89] "kube-apiserver-bridge-227717" [82eddf9e-11c5-4cf4-860d-e74122443d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0926 23:22:33.518662  539238 system_pods.go:89] "kube-controller-manager-bridge-227717" [f55d8abf-5fb0-4cfc-806e-d6dfde578806] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0926 23:22:33.518666  539238 system_pods.go:89] "kube-proxy-47cgp" [b6a43704-cb57-4490-bfce-aeda1c269a71] Running
	I0926 23:22:33.518672  539238 system_pods.go:89] "kube-scheduler-bridge-227717" [c9b4fe5f-64e3-4cbb-b2f5-55d1dce2afae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0926 23:22:33.518677  539238 system_pods.go:89] "storage-provisioner" [c5e9f540-b456-4ade-ad9f-32f2a5282c3a] Running
	I0926 23:22:33.518691  539238 system_pods.go:126] duration metric: took 589.625493ms to wait for k8s-apps to be running ...
	I0926 23:22:33.518705  539238 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 23:22:33.518763  539238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:22:33.531944  539238 system_svc.go:56] duration metric: took 13.225117ms WaitForService to wait for kubelet
	I0926 23:22:33.531979  539238 kubeadm.go:586] duration metric: took 1.164717159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 23:22:33.532004  539238 node_conditions.go:102] verifying NodePressure condition ...
	I0926 23:22:33.534919  539238 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0926 23:22:33.534954  539238 node_conditions.go:123] node cpu capacity is 8
	I0926 23:22:33.534971  539238 node_conditions.go:105] duration metric: took 2.956742ms to run NodePressure ...
	I0926 23:22:33.534986  539238 start.go:241] waiting for startup goroutines ...
	I0926 23:22:33.535000  539238 start.go:246] waiting for cluster config update ...
	I0926 23:22:33.535022  539238 start.go:255] writing updated cluster config ...
	I0926 23:22:33.535370  539238 ssh_runner.go:195] Run: rm -f paused
	I0926 23:22:33.539181  539238 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:22:33.543859  539238 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-49j55" in "kube-system" namespace to be "Ready" or be gone ...
	W0926 23:22:35.549246  539238 pod_ready.go:104] pod "coredns-66bc5c9577-49j55" is not "Ready", error: <nil>
	W0926 23:22:37.549996  539238 pod_ready.go:104] pod "coredns-66bc5c9577-49j55" is not "Ready", error: <nil>
	I0926 23:22:39.546591  539238 pod_ready.go:99] pod "coredns-66bc5c9577-49j55" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-49j55" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-49j55" not found
	I0926 23:22:39.546622  539238 pod_ready.go:86] duration metric: took 6.002734328s for pod "coredns-66bc5c9577-49j55" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.546636  539238 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f2bz7" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.551172  539238 pod_ready.go:94] pod "coredns-66bc5c9577-f2bz7" is "Ready"
	I0926 23:22:39.551193  539238 pod_ready.go:86] duration metric: took 4.550574ms for pod "coredns-66bc5c9577-f2bz7" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.553144  539238 pod_ready.go:83] waiting for pod "etcd-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.556581  539238 pod_ready.go:94] pod "etcd-bridge-227717" is "Ready"
	I0926 23:22:39.556601  539238 pod_ready.go:86] duration metric: took 3.432504ms for pod "etcd-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.558524  539238 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.562145  539238 pod_ready.go:94] pod "kube-apiserver-bridge-227717" is "Ready"
	I0926 23:22:39.562167  539238 pod_ready.go:86] duration metric: took 3.627142ms for pod "kube-apiserver-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.564099  539238 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:39.947447  539238 pod_ready.go:94] pod "kube-controller-manager-bridge-227717" is "Ready"
	I0926 23:22:39.947483  539238 pod_ready.go:86] duration metric: took 383.36072ms for pod "kube-controller-manager-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:40.147990  539238 pod_ready.go:83] waiting for pod "kube-proxy-47cgp" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:40.547815  539238 pod_ready.go:94] pod "kube-proxy-47cgp" is "Ready"
	I0926 23:22:40.547842  539238 pod_ready.go:86] duration metric: took 399.826814ms for pod "kube-proxy-47cgp" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:40.748134  539238 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:41.147443  539238 pod_ready.go:94] pod "kube-scheduler-bridge-227717" is "Ready"
	I0926 23:22:41.147473  539238 pod_ready.go:86] duration metric: took 399.309771ms for pod "kube-scheduler-bridge-227717" in "kube-system" namespace to be "Ready" or be gone ...
	I0926 23:22:41.147487  539238 pod_ready.go:40] duration metric: took 7.608272943s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0926 23:22:41.193346  539238 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0926 23:22:41.195201  539238 out.go:179] * Done! kubectl is now configured to use "bridge-227717" cluster and "default" namespace by default
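The bridge-227717 start above completes after the extra pod wait: each listed kube-system pod is polled until it reports Ready, and a pod that disappears mid-wait (like the replaced coredns-66bc5c9577-49j55) counts as success. Below is a minimal sketch of that wait pattern in Go, assuming client-go and a local kubeconfig; it illustrates the behavior recorded in the log and is not minikube's actual pod_ready.go.

	// Sketch of a "Ready or gone" pod wait, as seen in the log above.
	// Assumptions: client-go is vendored and ~/.kube/config points at the cluster.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod carries a Ready=True condition.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReadyOrGone polls until the named pod is Ready or no longer exists.
	func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return true, nil // pod is gone: treated as success, as in the log
				}
				if err != nil {
					return false, nil // transient error: keep retrying
				}
				return podIsReady(pod), nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// "coredns-66bc5c9577-49j55" is the pod name from the log above.
		err = waitPodReadyOrGone(context.Background(), cs, "kube-system", "coredns-66bc5c9577-49j55")
		fmt.Println("wait result:", err)
	}

The warnings that follow come from a different minikube process (pid 502638) bringing up the parallel calico-227717 cluster.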
	W0926 23:22:39.565057  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	W0926 23:22:42.064614  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	[... the identical warning repeats roughly every 2.5s from 23:22:44 through 23:34:48 (315 lines elided) ...]
	W0926 23:34:51.064703  502638 node_ready.go:57] node "calico-227717" has "Ready":"False" status (will retry)
	I0926 23:34:53.062819  502638 node_ready.go:38] duration metric: took 15m0.001007117s for node "calico-227717" to be "Ready" ...
	I0926 23:34:53.064890  502638 out.go:203] 
	W0926 23:34:53.066104  502638 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W0926 23:34:53.066123  502638 out.go:285] * 
	W0926 23:34:53.067854  502638 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 23:34:53.068830  502638 out.go:203] 
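The calico-227717 start fails here: the node never reports Ready within the 15m budget, so the wait ends in a context-deadline error that minikube surfaces as GUEST_START. Below is a compilable sketch of such a node wait, under the same client-go assumptions as the pod example above; it is illustrative only, not minikube's node_ready.go.

	// Package readiness: an illustrative node-ready wait.
	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node's Ready condition every ~2.5s (the cadence of
	// the warnings above) for up to 15m; on timeout PollUntilContextTimeout
	// returns a context-deadline error, which the caller can wrap the way
	// minikube wraps it into the GUEST_START exit above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, 15*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: retry until the deadline
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

A node that stays NotReady for the whole window like this usually means the CNI never initialized on it, which would be consistent with the calico data plane failing to come up, though the log here does not show the calico pods themselves.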
	
	
	==> CRI-O <==
	Sep 26 23:37:49 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:37:49.366446036Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=458bcc58-7b65-4dce-8e0a-76cbf51c8acf name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:37:50 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:37:50.365915444Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a596aa0a-6332-4f74-9673-6660b1e7bb7e name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:37:50 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:37:50.366157920Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a596aa0a-6332-4f74-9673-6660b1e7bb7e name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:01 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:01.365395969Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=71d3085c-f728-4d12-959f-892978541b5f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:01 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:01.365739895Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=71d3085c-f728-4d12-959f-892978541b5f name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:05 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:05.365485154Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5f634d61-3333-4a44-9471-018ddde717b7 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:05 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:05.365769489Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5f634d61-3333-4a44-9471-018ddde717b7 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:13 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:13.366140067Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=dc6c6fbb-6e14-4ff0-ac72-10f4e48c7d20 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:13 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:13.366460788Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=dc6c6fbb-6e14-4ff0-ac72-10f4e48c7d20 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:19 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:19.366163782Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=bde737b0-e6de-4f57-9fed-e2a4822b8d20 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:19 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:19.366451988Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=bde737b0-e6de-4f57-9fed-e2a4822b8d20 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:24 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:24.365340223Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4258406c-1791-4cda-9f6b-c5c863fd14a3 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:24 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:24.365622755Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=4258406c-1791-4cda-9f6b-c5c863fd14a3 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:32 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:32.365917646Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=768a4af8-cc8d-4a12-ae45-facd08498964 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:32 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:32.366161298Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=768a4af8-cc8d-4a12-ae45-facd08498964 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:37 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:37.366569849Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f843e61c-ab0d-4752-aa2c-ee4ae4d7a8a9 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:37 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:37.366936172Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f843e61c-ab0d-4752-aa2c-ee4ae4d7a8a9 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:44 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:44.365403815Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4bf58b18-52cc-417c-87e3-8fc3af9ab0b4 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:44 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:44.365653118Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4bf58b18-52cc-417c-87e3-8fc3af9ab0b4 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:51 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:51.367233294Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=bed973eb-783c-4a97-8476-97e089f5ebcc name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:51 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:51.367500272Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=bed973eb-783c-4a97-8476-97e089f5ebcc name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:59 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:59.365612791Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=16ff5bbc-8345-413d-a7ae-765b65d5bb13 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:38:59 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:38:59.365860052Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=16ff5bbc-8345-413d-a7ae-765b65d5bb13 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:39:04 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:39:04.365950062Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=e731cc92-2096-4b99-bd53-39bf50d6e835 name=/runtime.v1.ImageService/ImageStatus
	Sep 26 23:39:04 default-k8s-diff-port-441435 crio[542]: time="2025-09-26 23:39:04.366229427Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=e731cc92-2096-4b99-bd53-39bf50d6e835 name=/runtime.v1.ImageService/ImageStatus
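The CRI-O entries above are the kubelet repeatedly asking the image service whether two images exist locally: fake.domain/registry.k8s.io/echoserver:1.4 is deliberately unpullable test data, and the dashboard digest is not in the local store, so both probes keep logging "not found". Below is a small sketch of the same ImageService/ImageStatus call over the CRI gRPC API, assuming CRI-O's default socket path.

	// Sketch: query CRI-O's ImageService the way kubelet does in the log above.
	// Assumption: CRI-O listens on its default socket /var/run/crio/crio.sock.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewImageServiceClient(conn)
		resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
			Image: &runtimeapi.ImageSpec{Image: "fake.domain/registry.k8s.io/echoserver:1.4"},
		})
		if err != nil {
			panic(err)
		}
		// A nil Image in the response is what CRI-O logs as "Image ... not found".
		fmt.Println("image present:", resp.Image != nil)
	}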
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	00975ddc9e68b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   2 minutes ago       Exited              dashboard-metrics-scraper   8                   65beccd5d966a       dashboard-metrics-scraper-6ffb444bf9-w7gnq
	9bf11225358a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner         2                   041570f9c31cd       storage-provisioner
	0c3a33e05e709       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 minutes ago      Running             coredns                     1                   fdb68aeaed54c       coredns-66bc5c9577-2svp4
	96a7a432e179a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Exited              storage-provisioner         1                   041570f9c31cd       storage-provisioner
	1ea8a4730040a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c   18 minutes ago      Running             busybox                     1                   3b9acece4505e       busybox
	1fa78b4bbc3cc       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   18 minutes ago      Running             kube-proxy                  1                   370c72854926f       kube-proxy-9nbwg
	fd86762b4522b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   18 minutes ago      Running             kindnet-cni                 1                   7795ea74236fa       kindnet-qm5t5
	e0edf08c6a8d0       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 minutes ago      Running             kube-scheduler              1                   437843520db01       kube-scheduler-default-k8s-diff-port-441435
	21fe9e343c66d       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 minutes ago      Running             kube-apiserver              1                   4eda2cd5efb46       kube-apiserver-default-k8s-diff-port-441435
	64c15902266a0       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 minutes ago      Running             kube-controller-manager     1                   bc5ae73f49fb9       kube-controller-manager-default-k8s-diff-port-441435
	1d646ab3cd316       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 minutes ago      Running             etcd                        1                   2c0e0dcda36d5       etcd-default-k8s-diff-port-441435
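The table above is CRI-O's own view of the containers on this node; note dashboard-metrics-scraper in Exited state at attempt 8, i.e. it is crash-looping. Roughly the same listing can be pulled over the CRI RuntimeService, as in this sketch (same socket assumption as the ImageService example above); on a live node, crictl ps -a prints the equivalent table.

	// Sketch: list containers via the CRI RuntimeService, mirroring the table above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			id := c.Id
			if len(id) > 13 {
				id = id[:13] // the table above shows the same 13-char prefix
			}
			// State prints as CONTAINER_RUNNING / CONTAINER_EXITED, as in the table.
			fmt.Printf("%s  %-28s attempt=%d  %s\n", id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}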
	
	
	==> coredns [0c3a33e05e70970067727345c250b9acf323d2e519a614e24d23308c8701221b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[... previous line repeated 9 times in total ...]
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55106 - 13743 "HINFO IN 6315080665969330000.7669187035379610219. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038747375s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
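The coredns errors above come from the kubernetes plugin's client-go reflectors: from inside the pod they reach the API server through the cluster service VIP (10.96.0.1:443), and while that VIP is not yet being forwarded the initial List calls time out, which also keeps the ready plugin reporting "Still waiting". Below is a simplified sketch of the failing call, assuming in-cluster config; it is not coredns's actual reflector code.

	// Sketch: the kind of List call that times out in the coredns log above.
	// Assumption: this runs inside a pod, so rest.InClusterConfig resolves to
	// the service VIP (here https://10.96.0.1:443).
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{Limit: 500})
		if err != nil {
			// This is the path that surfaces "dial tcp 10.96.0.1:443: i/o timeout".
			fmt.Println("list failed:", err)
			return
		}
		fmt.Println("services:", len(svcs.Items))
	}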
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-441435
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-441435
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=528ef52dd808f925e881f79a2a823817d9197d47
	                    minikube.k8s.io/name=default-k8s-diff-port-441435
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_26T23_18_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 26 Sep 2025 23:18:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-441435
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 26 Sep 2025 23:39:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 26 Sep 2025 23:34:56 +0000   Fri, 26 Sep 2025 23:18:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 26 Sep 2025 23:34:56 +0000   Fri, 26 Sep 2025 23:18:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 26 Sep 2025 23:34:56 +0000   Fri, 26 Sep 2025 23:18:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 26 Sep 2025 23:34:56 +0000   Fri, 26 Sep 2025 23:19:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-441435
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 9615023ddc864d09988eb0bc06957254
	  System UUID:                837c0aba-5121-40c5-a1c3-287b72515219
	  Boot ID:                    ce94279f-6745-4217-952d-f9fda1755c22
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-2svp4                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     20m
	  kube-system                 etcd-default-k8s-diff-port-441435                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         20m
	  kube-system                 kindnet-qm5t5                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-441435             250m (3%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-441435    200m (2%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-9nbwg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-441435             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-746fcd58dc-n2fs6                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-w7gnq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hjt5g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-441435 event: Registered Node default-k8s-diff-port-441435 in Controller
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-441435 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-441435 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-441435 event: Registered Node default-k8s-diff-port-441435 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 48 6a f6 09 4b 08 06
	[ +10.903979] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a e8 7e d3 5b 72 08 06
	[  +0.000341] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 63 22 23 91 7c 08 06
	[  +0.001352] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ad 0e 7e 3d 1a 08 06
	[ +32.901964] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 52 a1 ed 2f d7 08 06
	[  +0.000406] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 48 6a f6 09 4b 08 06
	[Sep26 23:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e c1 c6 a4 45 cb 08 06
	[ +17.540919] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 9e d2 16 c1 17 08 06
	[  +0.001348] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a c7 2f db 7f 89 08 06
	[  +4.808582] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 46 38 a7 fc 6c f4 08 06
	[  +0.000403] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e c1 c6 a4 45 cb 08 06
	[ +13.075040] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de e8 79 e9 6a 4e 08 06
	[  +0.000347] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 96 9e d2 16 c1 17 08 06
	
	
	==> etcd [1d646ab3cd316ea8612e146e135cdfea98b3e83b08e2488d4055108a2e9cd101] <==
	{"level":"info","ts":"2025-09-26T23:20:26.249199Z","caller":"traceutil/trace.go:172","msg":"trace[982243969] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"191.168923ms","start":"2025-09-26T23:20:26.058015Z","end":"2025-09-26T23:20:26.249184Z","steps":["trace[982243969] 'process raft request'  (duration: 191.043008ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:20:26.249356Z","caller":"traceutil/trace.go:172","msg":"trace[2016466082] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"191.353752ms","start":"2025-09-26T23:20:26.057991Z","end":"2025-09-26T23:20:26.249345Z","steps":["trace[2016466082] 'process raft request'  (duration: 44.225429ms)","trace[2016466082] 'compare'  (duration: 146.430808ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-26T23:20:42.890070Z","caller":"traceutil/trace.go:172","msg":"trace[1941064881] linearizableReadLoop","detail":"{readStateIndex:664; appliedIndex:664; }","duration":"104.059489ms","start":"2025-09-26T23:20:42.785982Z","end":"2025-09-26T23:20:42.890041Z","steps":["trace[1941064881] 'read index received'  (duration: 104.053033ms)","trace[1941064881] 'applied index is now lower than readState.Index'  (duration: 5.628µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:20:42.894309Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.306702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-2svp4\" limit:1 ","response":"range_response_count:1 size:5928"}
	{"level":"info","ts":"2025-09-26T23:20:42.894365Z","caller":"traceutil/trace.go:172","msg":"trace[1715597443] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-2svp4; range_end:; response_count:1; response_revision:625; }","duration":"108.375408ms","start":"2025-09-26T23:20:42.785973Z","end":"2025-09-26T23:20:42.894348Z","steps":["trace[1715597443] 'agreement among raft nodes before linearized reading'  (duration: 104.13383ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:20:42.894367Z","caller":"traceutil/trace.go:172","msg":"trace[785398395] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"130.034059ms","start":"2025-09-26T23:20:42.764318Z","end":"2025-09-26T23:20:42.894352Z","steps":["trace[785398395] 'process raft request'  (duration: 125.794582ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:21:40.261770Z","caller":"traceutil/trace.go:172","msg":"trace[1456935280] linearizableReadLoop","detail":"{readStateIndex:762; appliedIndex:762; }","duration":"100.806589ms","start":"2025-09-26T23:21:40.160935Z","end":"2025-09-26T23:21:40.261741Z","steps":["trace[1456935280] 'read index received'  (duration: 100.798933ms)","trace[1456935280] 'applied index is now lower than readState.Index'  (duration: 6.684µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-26T23:21:40.261897Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.935154ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:21:40.261965Z","caller":"traceutil/trace.go:172","msg":"trace[16243359] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:711; }","duration":"101.009729ms","start":"2025-09-26T23:21:40.160927Z","end":"2025-09-26T23:21:40.261937Z","steps":["trace[16243359] 'agreement among raft nodes before linearized reading'  (duration: 100.898225ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:21:40.261975Z","caller":"traceutil/trace.go:172","msg":"trace[2060903208] transaction","detail":"{read_only:false; response_revision:712; number_of_response:1; }","duration":"129.905716ms","start":"2025-09-26T23:21:40.132042Z","end":"2025-09-26T23:21:40.261948Z","steps":["trace[2060903208] 'process raft request'  (duration: 129.75294ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:21:40.443063Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.730773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:21:40.443160Z","caller":"traceutil/trace.go:172","msg":"trace[1149011100] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:712; }","duration":"107.841127ms","start":"2025-09-26T23:21:40.335302Z","end":"2025-09-26T23:21:40.443143Z","steps":["trace[1149011100] 'range keys from in-memory index tree'  (duration: 107.626066ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:22:11.595673Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.548576ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765092064432610 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-746fcd58dc-n2fs6.1868f871dc670f2a\" mod_revision:677 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-746fcd58dc-n2fs6.1868f871dc670f2a\" value_size:694 lease:6571765092064432247 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-746fcd58dc-n2fs6.1868f871dc670f2a\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-26T23:22:11.595924Z","caller":"traceutil/trace.go:172","msg":"trace[133391157] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"129.324958ms","start":"2025-09-26T23:22:11.466574Z","end":"2025-09-26T23:22:11.595899Z","steps":["trace[133391157] 'compare'  (duration: 126.45573ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:22:12.815360Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.85881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-26T23:22:12.815436Z","caller":"traceutil/trace.go:172","msg":"trace[1508898893] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:750; }","duration":"228.950283ms","start":"2025-09-26T23:22:12.586469Z","end":"2025-09-26T23:22:12.815419Z","steps":["trace[1508898893] 'range keys from in-memory index tree'  (duration: 228.759404ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-26T23:22:12.815368Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.572022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.94.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-09-26T23:22:12.815535Z","caller":"traceutil/trace.go:172","msg":"trace[104734176] range","detail":"{range_begin:/registry/masterleases/192.168.94.2; range_end:; response_count:1; response_revision:750; }","duration":"123.744618ms","start":"2025-09-26T23:22:12.691773Z","end":"2025-09-26T23:22:12.815518Z","steps":["trace[104734176] 'range keys from in-memory index tree'  (duration: 123.399395ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:22:13.200982Z","caller":"traceutil/trace.go:172","msg":"trace[547930826] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"118.418788ms","start":"2025-09-26T23:22:13.082527Z","end":"2025-09-26T23:22:13.200945Z","steps":["trace[547930826] 'process raft request'  (duration: 118.246235ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-26T23:30:19.597204Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":986}
	{"level":"info","ts":"2025-09-26T23:30:19.603657Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":986,"took":"6.105476ms","hash":3463923747,"current-db-size-bytes":3100672,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":3100672,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2025-09-26T23:30:19.603700Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3463923747,"revision":986,"compact-revision":-1}
	{"level":"info","ts":"2025-09-26T23:35:19.601690Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1268}
	{"level":"info","ts":"2025-09-26T23:35:19.604546Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1268,"took":"2.511389ms","hash":482805193,"current-db-size-bytes":3100672,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-26T23:35:19.604581Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":482805193,"revision":1268,"compact-revision":986}
	
	
	==> kernel <==
	 23:39:04 up  3:21,  0 users,  load average: 0.15, 0.32, 1.75
	Linux default-k8s-diff-port-441435 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [fd86762b4522b1c596fbe8975d894d466731398f03f76681cf932e1b4dfea904] <==
	I0926 23:37:02.331068       1 main.go:301] handling current node
	I0926 23:37:12.332222       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:37:12.332273       1 main.go:301] handling current node
	I0926 23:37:22.330438       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:37:22.330483       1 main.go:301] handling current node
	I0926 23:37:32.332136       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:37:32.332185       1 main.go:301] handling current node
	I0926 23:37:42.332840       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:37:42.332878       1 main.go:301] handling current node
	I0926 23:37:52.332223       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:37:52.332268       1 main.go:301] handling current node
	I0926 23:38:02.331230       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:38:02.331281       1 main.go:301] handling current node
	I0926 23:38:12.339316       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:38:12.339373       1 main.go:301] handling current node
	I0926 23:38:22.336573       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:38:22.336620       1 main.go:301] handling current node
	I0926 23:38:32.330376       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:38:32.330414       1 main.go:301] handling current node
	I0926 23:38:42.339459       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:38:42.339497       1 main.go:301] handling current node
	I0926 23:38:52.331371       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:38:52.331410       1 main.go:301] handling current node
	I0926 23:39:02.332657       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0926 23:39:02.332721       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21fe9e343c66d826051000d442f68139ba6476d8d897627f45fbaa98c51cd141] <==
	I0926 23:35:21.572183       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0926 23:35:23.094761       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:36:16.558979       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0926 23:36:21.571791       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:36:21.571849       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0926 23:36:21.571869       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0926 23:36:21.572905       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:36:21.572995       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0926 23:36:21.573013       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0926 23:36:48.197528       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:37:18.911677       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:38:05.592278       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0926 23:38:20.239929       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0926 23:38:21.572764       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:38:21.572819       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0926 23:38:21.572837       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0926 23:38:21.573903       1 handler_proxy.go:99] no RequestInfo found in the context
	E0926 23:38:21.573991       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0926 23:38:21.574012       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [64c15902266a0928888ac2fd8de2a1006cf6d55f71d5294f6c4bdffe76988b44] <==
	I0926 23:32:55.158204       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:33:25.087144       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:33:25.166282       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:33:55.091200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:33:55.173026       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:34:25.095720       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:34:25.180037       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:34:55.100131       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:34:55.188079       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:35:25.105896       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:35:25.195663       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:35:55.109445       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:35:55.203076       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:36:25.113900       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:36:25.209994       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:36:55.117841       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:36:55.216894       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:37:25.121942       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:37:25.224128       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:37:55.126723       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:37:55.230606       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:38:25.131598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:38:25.238017       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0926 23:38:55.135914       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0926 23:38:55.244407       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1fa78b4bbc3cc848294e5ae05f1aed1b0d0d54ffd25adc12d14666d297c4d06a] <==
	I0926 23:20:21.917321       1 server_linux.go:53] "Using iptables proxy"
	I0926 23:20:21.979131       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0926 23:20:22.080259       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0926 23:20:22.080307       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0926 23:20:22.080385       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0926 23:20:22.100249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0926 23:20:22.100326       1 server_linux.go:132] "Using iptables Proxier"
	I0926 23:20:22.105683       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0926 23:20:22.106548       1 server.go:527] "Version info" version="v1.34.0"
	I0926 23:20:22.106575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:20:22.109072       1 config.go:309] "Starting node config controller"
	I0926 23:20:22.109104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0926 23:20:22.109113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0926 23:20:22.109350       1 config.go:403] "Starting serviceCIDR config controller"
	I0926 23:20:22.109359       1 config.go:200] "Starting service config controller"
	I0926 23:20:22.109362       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0926 23:20:22.109366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0926 23:20:22.109382       1 config.go:106] "Starting endpoint slice config controller"
	I0926 23:20:22.109388       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0926 23:20:22.209477       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0926 23:20:22.209507       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0926 23:20:22.209485       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e0edf08c6a8d03aecd750f5c9c512d8c79264b69e219947b80910ae37c4b980a] <==
	I0926 23:20:18.567543       1 serving.go:386] Generated self-signed cert in-memory
	W0926 23:20:20.544902       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0926 23:20:20.544941       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0926 23:20:20.544954       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0926 23:20:20.544986       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0926 23:20:20.586189       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0926 23:20:20.586294       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0926 23:20:20.590799       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0926 23:20:20.590937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:20:20.590993       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0926 23:20:20.607719       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0926 23:20:20.631005       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 26 23:38:17 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:17.510280     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929897510000466  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:38:19 default-k8s-diff-port-441435 kubelet[689]: I0926 23:38:19.365801     689 scope.go:117] "RemoveContainer" containerID="00975ddc9e68b268d1a9e9b60700f0330d14269c39d738f109b2f7b3fc6114a5"
	Sep 26 23:38:19 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:19.365991     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w7gnq_kubernetes-dashboard(d62b2960-6886-4571-8569-886c377485d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq" podUID="d62b2960-6886-4571-8569-886c377485d9"
	Sep 26 23:38:19 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:19.366749     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-n2fs6" podUID="94cb3f4e-8d94-49b7-849d-e5ae5a749431"
	Sep 26 23:38:24 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:24.366005     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjt5g" podUID="f3e14755-2887-45b0-be6b-6ce721ec83dc"
	Sep 26 23:38:27 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:27.511659     689 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758929907511386417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:38:27 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:27.511705     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929907511386417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:38:32 default-k8s-diff-port-441435 kubelet[689]: I0926 23:38:32.365510     689 scope.go:117] "RemoveContainer" containerID="00975ddc9e68b268d1a9e9b60700f0330d14269c39d738f109b2f7b3fc6114a5"
	Sep 26 23:38:32 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:32.365671     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w7gnq_kubernetes-dashboard(d62b2960-6886-4571-8569-886c377485d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq" podUID="d62b2960-6886-4571-8569-886c377485d9"
	Sep 26 23:38:32 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:32.366465     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-n2fs6" podUID="94cb3f4e-8d94-49b7-849d-e5ae5a749431"
	Sep 26 23:38:37 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:37.367245     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjt5g" podUID="f3e14755-2887-45b0-be6b-6ce721ec83dc"
	Sep 26 23:38:37 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:37.513066     689 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758929917512804141  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:38:37 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:37.513117     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929917512804141  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:38:43 default-k8s-diff-port-441435 kubelet[689]: I0926 23:38:43.367065     689 scope.go:117] "RemoveContainer" containerID="00975ddc9e68b268d1a9e9b60700f0330d14269c39d738f109b2f7b3fc6114a5"
	Sep 26 23:38:43 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:43.367241     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w7gnq_kubernetes-dashboard(d62b2960-6886-4571-8569-886c377485d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq" podUID="d62b2960-6886-4571-8569-886c377485d9"
	Sep 26 23:38:44 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:44.365975     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-n2fs6" podUID="94cb3f4e-8d94-49b7-849d-e5ae5a749431"
	Sep 26 23:38:47 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:47.514381     689 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758929927514076725  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:38:47 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:47.514423     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929927514076725  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:38:51 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:51.367793     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjt5g" podUID="f3e14755-2887-45b0-be6b-6ce721ec83dc"
	Sep 26 23:38:56 default-k8s-diff-port-441435 kubelet[689]: I0926 23:38:56.365233     689 scope.go:117] "RemoveContainer" containerID="00975ddc9e68b268d1a9e9b60700f0330d14269c39d738f109b2f7b3fc6114a5"
	Sep 26 23:38:56 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:56.365421     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-w7gnq_kubernetes-dashboard(d62b2960-6886-4571-8569-886c377485d9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-w7gnq" podUID="d62b2960-6886-4571-8569-886c377485d9"
	Sep 26 23:38:57 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:57.515541     689 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758929937515260681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:38:57 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:57.515580     689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758929937515260681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177772}  inodes_used:{value:67}}"
	Sep 26 23:38:59 default-k8s-diff-port-441435 kubelet[689]: E0926 23:38:59.366184     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-n2fs6" podUID="94cb3f4e-8d94-49b7-849d-e5ae5a749431"
	Sep 26 23:39:04 default-k8s-diff-port-441435 kubelet[689]: E0926 23:39:04.366650     689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hjt5g" podUID="f3e14755-2887-45b0-be6b-6ce721ec83dc"
	
	
	==> storage-provisioner [96a7a432e179ab0bb5840b3bbbd120003b450916d554d40c62aa9613d6afe25a] <==
	I0926 23:20:21.910853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0926 23:20:51.914552       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9bf11225358a2ee7799f2fb960a31d828a307b994fa48ec323c5f0c2fb6be477] <==
	W0926 23:38:40.227476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:42.230723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:42.234995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:44.238225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:44.242714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:46.245835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:46.250405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:48.253930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:48.258849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:50.261658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:50.265598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:52.268937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:52.272754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:54.276300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:54.280362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:56.283671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:56.287847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:58.291301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:38:58.295491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:39:00.298858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:39:00.302816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:39:02.306592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:39:02.311556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:39:04.314628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0926 23:39:04.318709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-n2fs6 kubernetes-dashboard-855c9754f9-hjt5g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 describe pod metrics-server-746fcd58dc-n2fs6 kubernetes-dashboard-855c9754f9-hjt5g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-441435 describe pod metrics-server-746fcd58dc-n2fs6 kubernetes-dashboard-855c9754f9-hjt5g: exit status 1 (58.393651ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-n2fs6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-hjt5g" not found

** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-441435 describe pod metrics-server-746fcd58dc-n2fs6 kubernetes-dashboard-855c9754f9-hjt5g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.54s)
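
The kubelet log above shows why this wait timed out: the metrics-server pod references the unresolvable image fake.domain/registry.k8s.io/echoserver:1.4 (ErrImagePull: no such host), and the kubernetes-dashboard pull was rejected by Docker Hub's unauthenticated rate limit (toomanyrequests), so neither pod could become Ready. A minimal sketch for confirming this by hand while the cluster is still up; the label selectors below are assumed from the stock metrics-server and dashboard manifests, not taken from this log:

	# List pods stuck outside Running, then inspect the image-pull events for each.
	kubectl --context default-k8s-diff-port-441435 get pods -A --field-selector=status.phase!=Running
	kubectl --context default-k8s-diff-port-441435 -n kube-system describe pod -l k8s-app=metrics-server
	kubectl --context default-k8s-diff-port-441435 -n kubernetes-dashboard describe pod -l k8s-app=kubernetes-dashboard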


Test pass (288/326)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.01
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 4.16
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.2
21 TestBinaryMirror 0.82
22 TestOffline 53.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 157.04
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 18.81
36 TestAddons/parallel/RegistryCreds 0.7
38 TestAddons/parallel/InspektorGadget 6.33
39 TestAddons/parallel/MetricsServer 5.68
41 TestAddons/parallel/CSI 41.17
42 TestAddons/parallel/Headlamp 17.68
43 TestAddons/parallel/CloudSpanner 5.52
44 TestAddons/parallel/LocalPath 50.64
45 TestAddons/parallel/NvidiaDevicePlugin 6.48
46 TestAddons/parallel/Yakd 10.74
47 TestAddons/parallel/AmdGpuDevicePlugin 5.49
48 TestAddons/StoppedEnableDisable 18.48
49 TestCertOptions 24.88
50 TestCertExpiration 227.72
52 TestForceSystemdFlag 23.43
53 TestForceSystemdEnv 34.55
55 TestKVMDriverInstallOrUpdate 0.57
59 TestErrorSpam/setup 22.41
60 TestErrorSpam/start 0.64
61 TestErrorSpam/status 0.92
62 TestErrorSpam/pause 1.47
63 TestErrorSpam/unpause 1.56
64 TestErrorSpam/stop 8.02
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 70.13
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.33
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.83
76 TestFunctional/serial/CacheCmd/cache/add_local 0.97
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.76
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 38.15
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.44
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 4.53
90 TestFunctional/parallel/ConfigCmd 0.38
92 TestFunctional/parallel/DryRun 0.38
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 0.9
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 22.46
102 TestFunctional/parallel/SSHCmd 0.58
103 TestFunctional/parallel/CpCmd 1.82
105 TestFunctional/parallel/FileSync 0.26
106 TestFunctional/parallel/CertSync 1.82
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
114 TestFunctional/parallel/License 0.21
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.22
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/Version/short 0.05
131 TestFunctional/parallel/Version/components 0.49
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
133 TestFunctional/parallel/ProfileCmd/profile_list 0.37
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
135 TestFunctional/parallel/MountCmd/any-port 7.7
136 TestFunctional/parallel/MountCmd/specific-port 1.86
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.92
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
142 TestFunctional/parallel/ImageCommands/ImageBuild 2.89
143 TestFunctional/parallel/ImageCommands/Setup 0.4
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.04
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
151 TestFunctional/parallel/ServiceCmd/List 1.7
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 135.69
164 TestMultiControlPlane/serial/DeployApp 5.68
165 TestMultiControlPlane/serial/PingHostFromPods 1.13
166 TestMultiControlPlane/serial/AddWorkerNode 54.1
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
169 TestMultiControlPlane/serial/CopyFile 16.7
170 TestMultiControlPlane/serial/StopSecondaryNode 19.78
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
172 TestMultiControlPlane/serial/RestartSecondaryNode 8.82
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 111.85
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.34
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
177 TestMultiControlPlane/serial/StopCluster 41.19
178 TestMultiControlPlane/serial/RestartCluster 58.34
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
180 TestMultiControlPlane/serial/AddSecondaryNode 66.87
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
185 TestJSONOutput/start/Command 69.55
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.67
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.65
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.09
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.2
210 TestKicCustomNetwork/create_custom_network 31.08
211 TestKicCustomNetwork/use_default_bridge_network 23.5
212 TestKicExistingNetwork 24.22
213 TestKicCustomSubnet 25.08
214 TestKicStaticIP 23.56
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 47.95
219 TestMountStart/serial/StartWithMountFirst 5.2
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 5.59
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.64
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.19
226 TestMountStart/serial/RestartStopped 7.28
227 TestMountStart/serial/VerifyMountPostStop 0.25
230 TestMultiNode/serial/FreshStart2Nodes 94.59
231 TestMultiNode/serial/DeployApp2Nodes 4.93
232 TestMultiNode/serial/PingHostFrom2Pods 0.75
233 TestMultiNode/serial/AddNode 53.86
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.65
236 TestMultiNode/serial/CopyFile 9.41
237 TestMultiNode/serial/StopNode 2.24
238 TestMultiNode/serial/StartAfterStop 7.26
239 TestMultiNode/serial/RestartKeepsNodes 76.41
240 TestMultiNode/serial/DeleteNode 5.22
241 TestMultiNode/serial/StopMultiNode 28.6
242 TestMultiNode/serial/RestartMultiNode 46.13
243 TestMultiNode/serial/ValidateNameConflict 23.79
248 TestPreload 108.51
250 TestScheduledStopUnix 95.69
253 TestInsufficientStorage 9.37
254 TestRunningBinaryUpgrade 45.53
256 TestKubernetesUpgrade 301.58
257 TestMissingContainerUpgrade 84.57
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 32.74
261 TestNoKubernetes/serial/StartWithStopK8s 20.97
262 TestNoKubernetes/serial/Start 10.88
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
264 TestNoKubernetes/serial/ProfileList 1.76
265 TestNoKubernetes/serial/Stop 1.2
266 TestNoKubernetes/serial/StartNoArgs 6.51
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
268 TestStoppedBinaryUpgrade/Setup 0.5
269 TestStoppedBinaryUpgrade/Upgrade 36.45
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
279 TestPause/serial/Start 70.08
280 TestPause/serial/SecondStartNoReconfiguration 7.32
281 TestPause/serial/Pause 0.64
282 TestPause/serial/VerifyStatus 0.31
283 TestPause/serial/Unpause 0.63
284 TestPause/serial/PauseAgain 0.7
285 TestPause/serial/DeletePaused 2.71
286 TestPause/serial/VerifyDeletedResources 13.79
294 TestNetworkPlugins/group/false 3.5
296 TestStartStop/group/old-k8s-version/serial/FirstStart 52.45
301 TestStartStop/group/embed-certs/serial/FirstStart 40.11
303 TestStartStop/group/no-preload/serial/FirstStart 54.64
304 TestStartStop/group/embed-certs/serial/DeployApp 9.26
305 TestStartStop/group/old-k8s-version/serial/DeployApp 10.27
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.82
307 TestStartStop/group/embed-certs/serial/Stop 18.18
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
309 TestStartStop/group/old-k8s-version/serial/Stop 16.12
310 TestStartStop/group/no-preload/serial/DeployApp 9.26
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
312 TestStartStop/group/embed-certs/serial/SecondStart 47.72
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
314 TestStartStop/group/no-preload/serial/Stop 16.69
315 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
316 TestStartStop/group/old-k8s-version/serial/SecondStart 48.03
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
318 TestStartStop/group/no-preload/serial/SecondStart 48.69
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
322 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
323 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
324 TestStartStop/group/embed-certs/serial/Pause 3.02
325 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.15
328 TestStartStop/group/old-k8s-version/serial/Pause 3.18
330 TestStartStop/group/newest-cni/serial/FirstStart 33.76
331 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
332 TestNetworkPlugins/group/auto/Start 71.73
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
334 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
335 TestStartStop/group/no-preload/serial/Pause 3.38
336 TestNetworkPlugins/group/kindnet/Start 70.95
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
339 TestStartStop/group/newest-cni/serial/Stop 2.44
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
341 TestStartStop/group/newest-cni/serial/SecondStart 11.03
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
345 TestStartStop/group/newest-cni/serial/Pause 2.61
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
349 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.12
350 TestNetworkPlugins/group/auto/KubeletFlags 0.37
351 TestNetworkPlugins/group/auto/NetCatPod 8.27
352 TestNetworkPlugins/group/auto/DNS 0.14
353 TestNetworkPlugins/group/auto/Localhost 0.13
354 TestNetworkPlugins/group/auto/HairPin 0.14
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
357 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.6
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
359 TestNetworkPlugins/group/kindnet/NetCatPod 8.21
360 TestNetworkPlugins/group/kindnet/DNS 0.17
361 TestNetworkPlugins/group/kindnet/Localhost 0.16
362 TestNetworkPlugins/group/kindnet/HairPin 0.12
363 TestNetworkPlugins/group/custom-flannel/Start 46.24
364 TestNetworkPlugins/group/enable-default-cni/Start 59.71
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.18
368 TestNetworkPlugins/group/custom-flannel/DNS 0.13
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
371 TestNetworkPlugins/group/flannel/Start 46.44
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.84
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
377 TestNetworkPlugins/group/bridge/Start 32.98
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
380 TestNetworkPlugins/group/flannel/NetCatPod 9.17
381 TestNetworkPlugins/group/flannel/DNS 0.13
382 TestNetworkPlugins/group/flannel/Localhost 0.11
383 TestNetworkPlugins/group/flannel/HairPin 0.11
384 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
385 TestNetworkPlugins/group/bridge/NetCatPod 9.18
386 TestNetworkPlugins/group/bridge/DNS 0.14
387 TestNetworkPlugins/group/bridge/Localhost 0.13
388 TestNetworkPlugins/group/bridge/HairPin 0.12
390 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
391 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.63

TestDownloadOnly/v1.28.0/json-events (5.01s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-054392 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-054392 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.006363186s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.01s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0926 22:29:27.138915  212137 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0926 22:29:27.139147  212137 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-054392
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-054392: exit status 85 (66.782454ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-054392 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-054392 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:22.176478  212149 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:22.176730  212149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:22.176738  212149 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:22.176742  212149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:22.176960  212149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	W0926 22:29:22.177101  212149 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21642-208519/.minikube/config/config.json: open /home/jenkins/minikube-integration/21642-208519/.minikube/config/config.json: no such file or directory
	I0926 22:29:22.177609  212149 out.go:368] Setting JSON to true
	I0926 22:29:22.178626  212149 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7911,"bootTime":1758917851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:22.178728  212149 start.go:140] virtualization: kvm guest
	I0926 22:29:22.181315  212149 out.go:99] [download-only-054392] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0926 22:29:22.181498  212149 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball: no such file or directory
	I0926 22:29:22.181543  212149 notify.go:220] Checking for updates...
	I0926 22:29:22.183067  212149 out.go:171] MINIKUBE_LOCATION=21642
	I0926 22:29:22.185049  212149 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:22.186784  212149 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 22:29:22.188439  212149 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 22:29:22.189818  212149 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0926 22:29:22.195245  212149 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 22:29:22.195692  212149 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:22.219625  212149 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:29:22.219758  212149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:22.621602  212149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-26 22:29:22.609587368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:22.621801  212149 docker.go:318] overlay module found
	I0926 22:29:22.623569  212149 out.go:99] Using the docker driver based on user configuration
	I0926 22:29:22.623610  212149 start.go:304] selected driver: docker
	I0926 22:29:22.623618  212149 start.go:924] validating driver "docker" against <nil>
	I0926 22:29:22.623798  212149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:22.682830  212149 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-26 22:29:22.672610804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:22.683018  212149 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:22.683530  212149 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0926 22:29:22.683697  212149 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 22:29:22.685571  212149 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-054392 host does not exist
	  To start a cluster, run: "minikube start -p download-only-054392"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-054392
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.0/json-events (4.16s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-148703 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-148703 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.154691787s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.16s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0926 22:29:31.702282  212137 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0926 22:29:31.702325  212137 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21642-208519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-148703
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-148703: exit status 85 (66.423688ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-054392 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-054392 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ delete  │ -p download-only-054392                                                                                                                                                   │ download-only-054392 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │ 26 Sep 25 22:29 UTC │
	│ start   │ -o=json --download-only -p download-only-148703 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-148703 │ jenkins │ v1.37.0 │ 26 Sep 25 22:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/26 22:29:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 22:29:27.589476  212497 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:29:27.589755  212497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:27.589764  212497 out.go:374] Setting ErrFile to fd 2...
	I0926 22:29:27.589768  212497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:29:27.589970  212497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 22:29:27.590459  212497 out.go:368] Setting JSON to true
	I0926 22:29:27.591314  212497 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7917,"bootTime":1758917851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:29:27.591409  212497 start.go:140] virtualization: kvm guest
	I0926 22:29:27.593464  212497 out.go:99] [download-only-148703] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:29:27.593587  212497 notify.go:220] Checking for updates...
	I0926 22:29:27.594892  212497 out.go:171] MINIKUBE_LOCATION=21642
	I0926 22:29:27.596254  212497 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:29:27.597421  212497 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 22:29:27.598536  212497 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 22:29:27.599806  212497 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0926 22:29:27.602047  212497 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 22:29:27.602308  212497 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:29:27.626338  212497 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:29:27.626465  212497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:27.682936  212497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-26 22:29:27.673452017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:27.683063  212497 docker.go:318] overlay module found
	I0926 22:29:27.684858  212497 out.go:99] Using the docker driver based on user configuration
	I0926 22:29:27.684885  212497 start.go:304] selected driver: docker
	I0926 22:29:27.684891  212497 start.go:924] validating driver "docker" against <nil>
	I0926 22:29:27.684977  212497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:29:27.743516  212497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-26 22:29:27.733541575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:29:27.743714  212497 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0926 22:29:27.744236  212497 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0926 22:29:27.744378  212497 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 22:29:27.746098  212497 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-148703 host does not exist
	  To start a cluster, run: "minikube start -p download-only-148703"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-148703
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.2s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-419889 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-419889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-419889
--- PASS: TestDownloadOnlyKic (1.20s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I0926 22:29:33.591116  212137 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-540712 --alsologtostderr --binary-mirror http://127.0.0.1:37555 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-540712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-540712
--- PASS: TestBinaryMirror (0.82s)

TestOffline (53.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-739910 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-739910 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (50.868885349s)
helpers_test.go:175: Cleaning up "offline-crio-739910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-739910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-739910: (2.722880626s)
--- PASS: TestOffline (53.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-341571
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-341571: exit status 85 (57.934654ms)

-- stdout --
	* Profile "addons-341571" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-341571"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-341571
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-341571: exit status 85 (58.553137ms)

-- stdout --
	* Profile "addons-341571" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-341571"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (157.04s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-341571 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-341571 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m37.034703634s)
--- PASS: TestAddons/Setup (157.04s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-341571 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-341571 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-341571 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-341571 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [374fa4b1-e64e-435c-b21f-3d388ae49e2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [374fa4b1-e64e-435c-b21f-3d388ae49e2b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003818904s
addons_test.go:694: (dbg) Run:  kubectl --context addons-341571 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-341571 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-341571 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

TestAddons/parallel/Registry (18.81s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.420072ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-ctjkw" [8a0f3fc0-2d0c-464a-9a56-61049fbb2336] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002866171s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-tzv57" [d80fb770-8ba3-4d49-9f0e-c1525a809861] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003909865s
addons_test.go:392: (dbg) Run:  kubectl --context addons-341571 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-341571 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-341571 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.000783319s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 ip
2025/09/26 22:32:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.81s)

TestAddons/parallel/RegistryCreds (0.7s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.708489ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-341571
addons_test.go:332: (dbg) Run:  kubectl --context addons-341571 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

TestAddons/parallel/InspektorGadget (6.33s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-nn52s" [3a6296ff-a81a-4f85-8bcb-68222d21f853] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003921662s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.33s)

TestAddons/parallel/MetricsServer (5.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.947931ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-t56ww" [6a75dcb7-123e-4f50-8ff9-89899540cfc4] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003789772s
addons_test.go:463: (dbg) Run:  kubectl --context addons-341571 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

TestAddons/parallel/CSI (41.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0926 22:32:36.254369  212137 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0926 22:32:36.257866  212137 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0926 22:32:36.257900  212137 kapi.go:107] duration metric: took 3.548418ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.561632ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-341571 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-341571 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [8c5d274f-bd2c-480f-89ae-b5d41df2c6fe] Pending
helpers_test.go:352: "task-pv-pod" [8c5d274f-bd2c-480f-89ae-b5d41df2c6fe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [8c5d274f-bd2c-480f-89ae-b5d41df2c6fe] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003915417s
addons_test.go:572: (dbg) Run:  kubectl --context addons-341571 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-341571 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-341571 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-341571 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-341571 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-341571 delete pod task-pv-pod: (1.167635599s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-341571 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-341571 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-341571 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c0a8179c-7041-4106-a251-ee8fecdfb6fa] Pending
helpers_test.go:352: "task-pv-pod-restore" [c0a8179c-7041-4106-a251-ee8fecdfb6fa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c0a8179c-7041-4106-a251-ee8fecdfb6fa] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004125149s
addons_test.go:614: (dbg) Run:  kubectl --context addons-341571 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-341571 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-341571 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-341571 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.577213301s)
--- PASS: TestAddons/parallel/CSI (41.17s)

TestAddons/parallel/Headlamp (17.68s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-341571 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-tqkrb" [aac1692a-40a2-47af-9f18-d26ab900cb6b] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-tqkrb" [aac1692a-40a2-47af-9f18-d26ab900cb6b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-tqkrb" [aac1692a-40a2-47af-9f18-d26ab900cb6b] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.067439684s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-341571 addons disable headlamp --alsologtostderr -v=1: (5.802295069s)
--- PASS: TestAddons/parallel/Headlamp (17.68s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-hwzkv" [854cbf8a-6289-4cd6-aada-9075e6054911] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003877904s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (50.64s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-341571 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-341571 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-341571 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5fe95d4f-25d5-47b7-8dfa-b5588ece9d37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5fe95d4f-25d5-47b7-8dfa-b5588ece9d37] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5fe95d4f-25d5-47b7-8dfa-b5588ece9d37] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004262883s
addons_test.go:967: (dbg) Run:  kubectl --context addons-341571 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 ssh "cat /opt/local-path-provisioner/pvc-49272039-2e34-4194-b5f2-2d7b41dce849_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-341571 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-341571 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-341571 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.765980102s)
--- PASS: TestAddons/parallel/LocalPath (50.64s)

TestAddons/parallel/NvidiaDevicePlugin (6.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-8j546" [471abb6b-3a33-47a4-96ae-a4fd74359b94] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003652562s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

TestAddons/parallel/Yakd (10.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-98xk5" [2e8740f7-5990-488d-a3a1-9fbd465bf3a0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00626611s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-341571 addons disable yakd --alsologtostderr -v=1: (5.728129236s)
--- PASS: TestAddons/parallel/Yakd (10.74s)

TestAddons/parallel/AmdGpuDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-5w42z" [286d2d17-a9e5-4aed-8459-4513ce47bc96] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003560545s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.49s)

TestAddons/StoppedEnableDisable (18.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-341571
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-341571: (18.221426497s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-341571
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-341571
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-341571
--- PASS: TestAddons/StoppedEnableDisable (18.48s)
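
The point of this test is that addon toggles must work against a stopped cluster; a sketch of the same sequence (profile name from this run):

	minikube stop -p addons-341571
	minikube addons enable dashboard -p addons-341571
	minikube addons disable dashboard -p addons-341571
	minikube addons disable gvisor -p addons-341571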

TestCertOptions (24.88s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-118260 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-118260 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.911458572s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-118260 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-118260 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-118260 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-118260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-118260
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-118260: (2.370016425s)
--- PASS: TestCertOptions (24.88s)
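
The flags exercised above inject extra IP and DNS SANs plus a custom port into the apiserver certificate, then verify them by decoding the cert on the node; a sketch with the values from this run:

	minikube start -p cert-options-118260 --memory=3072 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=docker --container-runtime=crio
	minikube -p cert-options-118260 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"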

TestCertExpiration (227.72s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-778862 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-778862 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (34.782083167s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-778862 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-778862 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (9.482483253s)
helpers_test.go:175: Cleaning up "cert-expiration-778862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-778862
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-778862: (3.452767416s)
--- PASS: TestCertExpiration (227.72s)
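
The first start issues certificates valid for only 3m; the second start, with --cert-expiration=8760h, regenerates them once they have expired, which presumably accounts for most of the 227s wall time (the two starts themselves total under a minute). A sketch:

	minikube start -p cert-expiration-778862 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
	# wait ~3m for the short-lived certificates to expire, then renew on restart
	minikube start -p cert-expiration-778862 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio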

TestForceSystemdFlag (23.43s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-233200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-233200 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.788281237s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-233200 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-233200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-233200
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-233200: (2.371113327s)
--- PASS: TestForceSystemdFlag (23.43s)
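
--force-systemd switches the runtime's cgroup manager to systemd, which the test confirms by reading CRI-O's drop-in config; a sketch (the expected cgroup_manager key name is an assumption, not quoted from this log):

	minikube start -p force-systemd-flag-233200 --memory=3072 --force-systemd --driver=docker --container-runtime=crio
	minikube -p force-systemd-flag-233200 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
	# expect the drop-in to select the systemd cgroup manager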

TestForceSystemdEnv (34.55s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-763916 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-763916 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.975639963s)
helpers_test.go:175: Cleaning up "force-systemd-env-763916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-763916
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-763916: (2.571894015s)
--- PASS: TestForceSystemdEnv (34.55s)

TestKVMDriverInstallOrUpdate (0.57s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0926 23:16:12.770458  212137 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0926 23:16:12.770615  212137 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2545799474/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0926 23:16:12.807797  212137 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2545799474/001/docker-machine-driver-kvm2 version is 1.1.1
W0926 23:16:12.807851  212137 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0926 23:16:12.807977  212137 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0926 23:16:12.808024  212137 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2545799474/001/docker-machine-driver-kvm2
I0926 23:16:13.178906  212137 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2545799474/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0926 23:16:13.197854  212137 install.go:163] /tmp/TestKVMDriverInstallOrUpdate2545799474/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.57s)
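
The log above shows the update path: the installed docker-machine-driver-kvm2 reports 1.1.1, minikube wants 1.37.0, so it downloads the release binary and verifies it against the published .sha256. A hand-rolled sketch (the version subcommand is an assumption inferred from the install.go output):

	docker-machine-driver-kvm2 version   # assumed to print the installed driver version
	curl -L -O https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64
	curl -L -O https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256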

TestErrorSpam/setup (22.41s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-054188 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-054188 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-054188 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-054188 --driver=docker  --container-runtime=crio: (22.411924509s)
--- PASS: TestErrorSpam/setup (22.41s)

TestErrorSpam/start (0.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (1.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.56s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

TestErrorSpam/stop (8.02s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 stop: (7.831546227s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-054188 --log_dir /tmp/nospam-054188 stop
--- PASS: TestErrorSpam/stop (8.02s)
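
TestErrorSpam drives each subcommand against a profile started with a dedicated --log_dir and fails if unexpected warnings or errors show up in the output or logs; a sketch of the pattern:

	minikube start -p nospam-054188 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-054188 --driver=docker --container-runtime=crio
	minikube -p nospam-054188 --log_dir /tmp/nospam-054188 pause
	minikube -p nospam-054188 --log_dir /tmp/nospam-054188 unpause
	minikube -p nospam-054188 --log_dir /tmp/nospam-054188 stop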

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21642-208519/.minikube/files/etc/test/nested/copy/212137/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383702 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0926 22:37:12.149355  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:12.155796  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:12.167263  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:12.188709  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:12.230169  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:12.311609  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:12.473181  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:12.794894  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:13.436807  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:14.718441  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:17.281330  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:22.404116  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:37:32.645952  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-383702 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m10.131900491s)
--- PASS: TestFunctional/serial/StartWithProxy (70.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.33s)

=== RUN   TestFunctional/serial/SoftStart
I0926 22:37:45.117407  212137 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383702 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-383702 --alsologtostderr -v=8: (6.331188794s)
functional_test.go:678: soft start took 6.332177195s for "functional-383702" cluster.
I0926 22:37:51.449417  212137 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.33s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-383702 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 cache add registry.k8s.io/pause:3.3
E0926 22:37:53.128309  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.83s)

TestFunctional/serial/CacheCmd/cache/add_local (0.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-383702 /tmp/TestFunctionalserialCacheCmdcacheadd_local1575342316/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 cache add minikube-local-cache-test:functional-383702
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 cache delete minikube-local-cache-test:functional-383702
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-383702
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.97s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.550323ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)
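
The cache_reload sequence shows how a deleted node image is restored from minikube's on-host cache; a sketch:

	minikube -p functional-383702 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-383702 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
	minikube -p functional-383702 cache reload
	minikube -p functional-383702 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again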

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 kubectl -- --context functional-383702 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-383702 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (38.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383702 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0926 22:38:34.091552  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-383702 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.146959717s)
functional_test.go:776: restart took 38.147079658s for "functional-383702" cluster.
I0926 22:38:36.005753  212137 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (38.15s)
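
--extra-config passes per-component flags through to the Kubernetes components; here it enables an extra apiserver admission plugin, which requires the ~38s restart recorded above. A sketch:

	minikube start -p functional-383702 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all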

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-383702 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.44s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-383702 logs: (1.439069709s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 logs --file /tmp/TestFunctionalserialLogsFileCmd2536091467/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-383702 logs --file /tmp/TestFunctionalserialLogsFileCmd2536091467/001/logs.txt: (1.456295191s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (4.53s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-383702 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-383702
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-383702: exit status 115 (406.037864ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31955 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-383702 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.53s)
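
minikube service still resolves the NodePort URL for a service with no running backend pod, but exits with status 115 (SVC_UNREACHABLE); a sketch of the failure and cleanup (testdata paths are relative to the minikube test tree):

	kubectl --context functional-383702 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-383702   # exit status 115, SVC_UNREACHABLE
	kubectl --context functional-383702 delete -f testdata/invalidsvc.yaml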

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 config get cpus: exit status 14 (83.820298ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 config get cpus: exit status 14 (52.50908ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
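
config get on an unset key exits with status 14 ("specified key could not be found in config"), which is what the test asserts after each unset; a sketch:

	minikube -p functional-383702 config unset cpus
	minikube -p functional-383702 config get cpus   # exit status 14: key not found
	minikube -p functional-383702 config set cpus 2
	minikube -p functional-383702 config get cpus   # prints 2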

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-383702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (154.591056ms)
-- stdout --
	* [functional-383702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0926 22:39:07.829560  253225 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:39:07.829829  253225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:07.829837  253225 out.go:374] Setting ErrFile to fd 2...
	I0926 22:39:07.829841  253225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:07.830023  253225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 22:39:07.830484  253225 out.go:368] Setting JSON to false
	I0926 22:39:07.831457  253225 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8497,"bootTime":1758917851,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:39:07.831553  253225 start.go:140] virtualization: kvm guest
	I0926 22:39:07.833779  253225 out.go:179] * [functional-383702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 22:39:07.835127  253225 notify.go:220] Checking for updates...
	I0926 22:39:07.835141  253225 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:39:07.836372  253225 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:39:07.837622  253225 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 22:39:07.838997  253225 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 22:39:07.840329  253225 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:39:07.841756  253225 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:39:07.843441  253225 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:39:07.843910  253225 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:39:07.868304  253225 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:39:07.868438  253225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:39:07.924736  253225 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-26 22:39:07.914063355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:39:07.924852  253225 docker.go:318] overlay module found
	I0926 22:39:07.926651  253225 out.go:179] * Using the docker driver based on existing profile
	I0926 22:39:07.927917  253225 start.go:304] selected driver: docker
	I0926 22:39:07.927934  253225 start.go:924] validating driver "docker" against &{Name:functional-383702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-383702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:39:07.928017  253225 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:39:07.931529  253225 out.go:203] 
	W0926 22:39:07.932953  253225 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 22:39:07.934231  253225 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383702 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
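
--dry-run validates the requested configuration without starting anything; an undersized --memory fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while a valid dry run exits cleanly. A sketch:

	minikube start -p functional-383702 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit status 23
	minikube start -p functional-383702 --dry-run --driver=docker --container-runtime=crio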

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-383702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (172.140425ms)
-- stdout --
	* [functional-383702] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0926 22:39:08.213339  253530 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:39:08.213599  253530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:08.213609  253530 out.go:374] Setting ErrFile to fd 2...
	I0926 22:39:08.213614  253530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:39:08.213934  253530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 22:39:08.214449  253530 out.go:368] Setting JSON to false
	I0926 22:39:08.215621  253530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8497,"bootTime":1758917851,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 22:39:08.215714  253530 start.go:140] virtualization: kvm guest
	I0926 22:39:08.217535  253530 out.go:179] * [functional-383702] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0926 22:39:08.219219  253530 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 22:39:08.219220  253530 notify.go:220] Checking for updates...
	I0926 22:39:08.220685  253530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 22:39:08.222326  253530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 22:39:08.223663  253530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 22:39:08.224967  253530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 22:39:08.226240  253530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 22:39:08.227804  253530 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:39:08.228421  253530 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 22:39:08.256215  253530 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 22:39:08.256361  253530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:39:08.320575  253530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-26 22:39:08.309855559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:39:08.320680  253530 docker.go:318] overlay module found
	I0926 22:39:08.323360  253530 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0926 22:39:08.324648  253530 start.go:304] selected driver: docker
	I0926 22:39:08.324684  253530 start.go:924] validating driver "docker" against &{Name:functional-383702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-383702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 22:39:08.324792  253530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 22:39:08.326812  253530 out.go:203] 
	W0926 22:39:08.329163  253530 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0926 22:39:08.330497  253530 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
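
status supports Go-template and JSON output as exercised above; note that the test's own format string spells the label "kublet", a harmless typo, since only the field name {{.Kubelet}} matters. A sketch with the label corrected:

	minikube -p functional-383702 status
	minikube -p functional-383702 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-383702 status -o json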

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)
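
addons list has a table form and a JSON form; the JSON is the scriptable one. A sketch of filtering it with jq, assuming the JSON output is a map of addon name to an object carrying a Status field:

# List only the enabled addons:
out/minikube-linux-amd64 -p functional-383702 addons list -o json \
  | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'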

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (22.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ce4b36b6-71f0-4103-ac84-6583315e4b43] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004436602s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-383702 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-383702 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-383702 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-383702 apply -f testdata/storage-provisioner/pod.yaml
I0926 22:38:50.685772  212137 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8a26fe35-db09-4cf0-8e82-f148846cd7ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8a26fe35-db09-4cf0-8e82-f148846cd7ad] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.014322371s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-383702 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-383702 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-383702 apply -f testdata/storage-provisioner/pod.yaml
I0926 22:39:00.667179  212137 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [41b4b17f-7903-40d6-bdaf-340d9cb8a12c] Pending
helpers_test.go:352: "sp-pod" [41b4b17f-7903-40d6-bdaf-340d9cb8a12c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [41b4b17f-7903-40d6-bdaf-340d9cb8a12c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004103617s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-383702 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.46s)
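
The test applies testdata/storage-provisioner/pvc.yaml, then mounts the claim from two successive pods (the second sp-pod above) to prove the data outlives pod deletion. A minimal sketch of an equivalent claim; the size and reliance on the default storage class are illustrative, not the actual testdata contents:

kubectl --context functional-383702 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF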

                                                
                                    
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh -n functional-383702 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 cp functional-383702:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1106116078/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh -n functional-383702 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh -n functional-383702 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.82s)
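
minikube cp accepts a plain path for the host side and either a plain path or a NODE:PATH form for the guest side, as the three runs above exercise. A sketch of a round trip:

# host -> guest:
out/minikube-linux-amd64 -p functional-383702 cp testdata/cp-test.txt /home/docker/cp-test.txt
# guest -> host, using the NODE:PATH form:
out/minikube-linux-amd64 -p functional-383702 cp functional-383702:/home/docker/cp-test.txt ./cp-test.txt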

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/212137/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo cat /etc/test/nested/copy/212137/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
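
The /etc/test/nested/copy/212137/hosts file checked above comes from minikube's file sync: anything placed under the minikube home's files/ tree is copied into the node at the same absolute path when the cluster starts. A sketch, assuming the default ~/.minikube home (the path and content here are illustrative):

mkdir -p ~/.minikube/files/etc/test
echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/hello
# After the profile is next started, the file appears inside the node:
out/minikube-linux-amd64 -p functional-383702 ssh "sudo cat /etc/test/hello"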

                                                
                                    
TestFunctional/parallel/CertSync (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/212137.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo cat /etc/ssl/certs/212137.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/212137.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo cat /usr/share/ca-certificates/212137.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2121372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo cat /etc/ssl/certs/2121372.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2121372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo cat /usr/share/ca-certificates/2121372.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)
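
The 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash aliases: TLS stacks look certificates up by the hash of their subject, so each synced .pem is also installed under its hash. A sketch of computing the hash on the host (the .pem path is illustrative):

# Prints the 8-hex-digit subject hash, e.g. 51391683, matching /etc/ssl/certs/51391683.0:
openssl x509 -noout -subject_hash -in 212137.pem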

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-383702 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 ssh "sudo systemctl is-active docker": exit status 1 (264.04052ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 ssh "sudo systemctl is-active containerd": exit status 1 (258.046852ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
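
The exit status 3 in the stderr blocks above is not an SSH failure: systemctl is-active encodes the unit state in its exit code, so "inactive" on stdout plus a non-zero exit is exactly what a disabled runtime looks like on a crio cluster. A sketch:

# stdout names the state; a non-zero exit (3 for inactive) accompanies it:
out/minikube-linux-amd64 -p functional-383702 ssh 'sudo systemctl is-active docker; echo exit=$?'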

                                                
                                    
TestFunctional/parallel/License (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-383702 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-383702 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-383702 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-383702 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 248683: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-383702 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-383702 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [5c64c4a4-4ad5-42c3-bd52-607ad482028c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [5c64c4a4-4ad5-42c3-bd52-607ad482028c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003421422s
I0926 22:38:54.119315  212137 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-383702 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.198.196 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
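
minikube tunnel is what gives the LoadBalancer service a host-reachable ingress IP; AccessDirect simply curls it. A sketch of the round trip, using the service created in the WaitService setup above:

# In one terminal, keep the tunnel open (it may prompt for sudo to create routes):
out/minikube-linux-amd64 -p functional-383702 tunnel
# In another, read the assigned ingress IP and hit the service directly:
IP=$(kubectl --context functional-383702 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$IP/" >/dev/null && echo "tunnel at http://$IP is working!"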

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-383702 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "321.986225ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "51.307395ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "324.669874ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "50.131002ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdany-port3812566940/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758926337311866218" to /tmp/TestFunctionalparallelMountCmdany-port3812566940/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758926337311866218" to /tmp/TestFunctionalparallelMountCmdany-port3812566940/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758926337311866218" to /tmp/TestFunctionalparallelMountCmdany-port3812566940/001/test-1758926337311866218
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (265.063663ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0926 22:38:57.577229  212137 retry.go:31] will retry after 564.112216ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 26 22:38 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 26 22:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 26 22:38 test-1758926337311866218
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh cat /mount-9p/test-1758926337311866218
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-383702 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [57e1f1f8-faa0-453a-8abe-678af8b2480f] Pending
helpers_test.go:352: "busybox-mount" [57e1f1f8-faa0-453a-8abe-678af8b2480f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [57e1f1f8-faa0-453a-8abe-678af8b2480f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [57e1f1f8-faa0-453a-8abe-678af8b2480f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003774056s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-383702 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdany-port3812566940/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.70s)
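
minikube mount exposes a host directory inside the node over 9p; without --port it picks an ephemeral one (hence "any-port"), and the findmnt probe above is retried until the mount lands. A sketch (the host directory is illustrative):

mkdir -p /tmp/hostdir
out/minikube-linux-amd64 mount -p functional-383702 /tmp/hostdir:/mount-9p &
out/minikube-linux-amd64 -p functional-383702 ssh 'findmnt -T /mount-9p | grep 9p'
# Cleanup, as the VerifyCleanup test below does for all mounts of the profile:
out/minikube-linux-amd64 mount -p functional-383702 --kill=true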

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdspecific-port1082812641/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (271.137708ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0926 22:39:05.283738  212137 retry.go:31] will retry after 592.891642ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdspecific-port1082812641/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 ssh "sudo umount -f /mount-9p": exit status 1 (261.833522ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-383702 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdspecific-port1082812641/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2939563306/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2939563306/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2939563306/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T" /mount1: exit status 1 (307.244162ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0926 22:39:07.182077  212137 retry.go:31] will retry after 729.417972ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-383702 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2939563306/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2939563306/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2939563306/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383702 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-383702
localhost/kicbase/echo-server:functional-383702
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383702 image ls --format short --alsologtostderr:
I0926 22:44:11.793042  257176 out.go:360] Setting OutFile to fd 1 ...
I0926 22:44:11.793331  257176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:11.793341  257176 out.go:374] Setting ErrFile to fd 2...
I0926 22:44:11.793348  257176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:11.793527  257176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
I0926 22:44:11.794190  257176 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:11.794316  257176 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:11.794755  257176 cli_runner.go:164] Run: docker container inspect functional-383702 --format={{.State.Status}}
I0926 22:44:11.812314  257176 ssh_runner.go:195] Run: systemctl --version
I0926 22:44:11.812373  257176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383702
I0926 22:44:11.829704  257176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/functional-383702/id_rsa Username:docker}
I0926 22:44:11.921970  257176 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
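
image ls issues the same "sudo crictl images --output json" query regardless of --format; the JSON format (shown in full under ImageListJson below) is the scriptable one. A sketch of sorting images by size with jq, using the field names from that listing:

# Largest images first; size is reported in bytes as a string:
out/minikube-linux-amd64 -p functional-383702 image ls --format json \
  | jq -r 'sort_by(.size | tonumber) | reverse | .[] | "\(.size)\t\(.repoTags[0] // "<none>")"'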

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383702 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ latest             │ 41f689c209100 │ 197MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/kicbase/echo-server           │ functional-383702  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ localhost/minikube-local-cache-test     │ functional-383702  │ b0dbb7d07d23f │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ docker.io/library/nginx                 │ alpine             │ 4a86014ec6994 │ 53.9MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-383702  │ ecac36ab86ac4 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383702 image ls --format table --alsologtostderr:
I0926 22:44:15.341814  257748 out.go:360] Setting OutFile to fd 1 ...
I0926 22:44:15.342110  257748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:15.342120  257748 out.go:374] Setting ErrFile to fd 2...
I0926 22:44:15.342128  257748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:15.342338  257748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
I0926 22:44:15.342987  257748 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:15.343152  257748 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:15.343594  257748 cli_runner.go:164] Run: docker container inspect functional-383702 --format={{.State.Status}}
I0926 22:44:15.361993  257748 ssh_runner.go:195] Run: systemctl --version
I0926 22:44:15.362063  257748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383702
I0926 22:44:15.379981  257748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/functional-383702/id_rsa Username:docker}
I0926 22:44:15.473311  257748 ssh_runner.go:195] Run: sudo crictl images --output json
E0926 22:47:12.143223  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383702 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"ecac36ab86ac4b9cadc1986301235684a80ba97f525aab504d4a0650f0e5ddde","repoDigests":["localhost/my-image@sha256:5773753fd988982493d23cf3427a323baa1236f3f53debe5a1fe0e31349b5f1e"],"repoTags":["localhost/my-image:functional-383702"],"size":"1468194"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53949946"},{"id":"41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81","repoDigests":["docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285","docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"196550530"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"b0dbb7d07d23ff2719d6a0d4eab376e41e2dc77a91ec3a7a2b4eb713f06469ee","repoDigests":["localhost/minikube-local-cache-test@sha256:4a4f47b9aec39b6c4092d4ae53596bdf6aac1476e583d2bad77585133af3dad3"],"repoTags":["localhost/minikube-local-cache-test:functional-383702"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"00fd51d38a144b1ef25bac40a9a2700e978b256eff9a0595e3a04d33b506c506","repoDigests":["docker.io/library/0c4857112d2281297bf13a8838bc49599f14120b7dd7aaa8aca0b89a76989f27-tmp@sha256:6ba38bdb10725a9dfc0d7b209d5e1ffa9cc552a00afb919fa600c170ac2b6797"],"repoTags":[],"size":"1465612"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-383702"],"size":"4943877"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383702 image ls --format json --alsologtostderr:
I0926 22:44:15.120811  257697 out.go:360] Setting OutFile to fd 1 ...
I0926 22:44:15.121066  257697 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:15.121075  257697 out.go:374] Setting ErrFile to fd 2...
I0926 22:44:15.121078  257697 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:15.121303  257697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
I0926 22:44:15.121839  257697 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:15.121937  257697 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:15.122288  257697 cli_runner.go:164] Run: docker container inspect functional-383702 --format={{.State.Status}}
I0926 22:44:15.139677  257697 ssh_runner.go:195] Run: systemctl --version
I0926 22:44:15.139733  257697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383702
I0926 22:44:15.157259  257697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/functional-383702/id_rsa Username:docker}
I0926 22:44:15.251315  257697 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383702 image ls --format yaml --alsologtostderr:
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: b0dbb7d07d23ff2719d6a0d4eab376e41e2dc77a91ec3a7a2b4eb713f06469ee
repoDigests:
- localhost/minikube-local-cache-test@sha256:4a4f47b9aec39b6c4092d4ae53596bdf6aac1476e583d2bad77585133af3dad3
repoTags:
- localhost/minikube-local-cache-test:functional-383702
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a
repoTags:
- docker.io/library/nginx:alpine
size: "53949946"
- id: 41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81
repoDigests:
- docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "196550530"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-383702
size: "4943877"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383702 image ls --format yaml --alsologtostderr:
I0926 22:44:12.016197  257226 out.go:360] Setting OutFile to fd 1 ...
I0926 22:44:12.016328  257226 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:12.016339  257226 out.go:374] Setting ErrFile to fd 2...
I0926 22:44:12.016346  257226 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:12.016576  257226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
I0926 22:44:12.017195  257226 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:12.017295  257226 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:12.017673  257226 cli_runner.go:164] Run: docker container inspect functional-383702 --format={{.State.Status}}
I0926 22:44:12.034969  257226 ssh_runner.go:195] Run: systemctl --version
I0926 22:44:12.035021  257226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383702
I0926 22:44:12.051554  257226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/functional-383702/id_rsa Username:docker}
I0926 22:44:12.144206  257226 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383702 ssh pgrep buildkitd: exit status 1 (256.073861ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image build -t localhost/my-image:functional-383702 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-383702 image build -t localhost/my-image:functional-383702 testdata/build --alsologtostderr: (2.40864076s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383702 image build -t localhost/my-image:functional-383702 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 00fd51d38a1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-383702
--> ecac36ab86a
Successfully tagged localhost/my-image:functional-383702
ecac36ab86ac4b9cadc1986301235684a80ba97f525aab504d4a0650f0e5ddde
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383702 image build -t localhost/my-image:functional-383702 testdata/build --alsologtostderr:
I0926 22:44:12.489542  257374 out.go:360] Setting OutFile to fd 1 ...
I0926 22:44:12.490487  257374 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:12.490501  257374 out.go:374] Setting ErrFile to fd 2...
I0926 22:44:12.490508  257374 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0926 22:44:12.490727  257374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
I0926 22:44:12.491344  257374 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:12.492056  257374 config.go:182] Loaded profile config "functional-383702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0926 22:44:12.492474  257374 cli_runner.go:164] Run: docker container inspect functional-383702 --format={{.State.Status}}
I0926 22:44:12.511143  257374 ssh_runner.go:195] Run: systemctl --version
I0926 22:44:12.511199  257374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383702
I0926 22:44:12.528263  257374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/functional-383702/id_rsa Username:docker}
I0926 22:44:12.621197  257374 build_images.go:161] Building image from path: /tmp/build.2249346742.tar
I0926 22:44:12.621293  257374 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0926 22:44:12.631205  257374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2249346742.tar
I0926 22:44:12.634818  257374 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2249346742.tar: stat -c "%s %y" /var/lib/minikube/build/build.2249346742.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2249346742.tar': No such file or directory
I0926 22:44:12.634845  257374 ssh_runner.go:362] scp /tmp/build.2249346742.tar --> /var/lib/minikube/build/build.2249346742.tar (3072 bytes)
I0926 22:44:12.660169  257374 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2249346742
I0926 22:44:12.669657  257374 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2249346742 -xf /var/lib/minikube/build/build.2249346742.tar
I0926 22:44:12.679019  257374 crio.go:315] Building image: /var/lib/minikube/build/build.2249346742
I0926 22:44:12.679075  257374 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-383702 /var/lib/minikube/build/build.2249346742 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0926 22:44:14.827508  257374 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-383702 /var/lib/minikube/build/build.2249346742 --cgroup-manager=cgroupfs: (2.148402696s)
I0926 22:44:14.827581  257374 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2249346742
I0926 22:44:14.837042  257374 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2249346742.tar
I0926 22:44:14.846130  257374 build_images.go:217] Built localhost/my-image:functional-383702 from /tmp/build.2249346742.tar
I0926 22:44:14.846160  257374 build_images.go:133] succeeded building to: functional-383702
I0926 22:44:14.846164  257374 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.89s)
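
A minimal sketch of reproducing the build step above by hand, assuming a running profile named functional-383702 and a build context directory containing a Dockerfile (the tag and directory names are illustrative):

    # Build an image inside the minikube node; on crio profiles this delegates to podman, as the log shows
    out/minikube-linux-amd64 -p functional-383702 image build -t localhost/my-image:functional-383702 testdata/build
    # Confirm the image landed in the node's image store
    out/minikube-linux-amd64 -p functional-383702 image ls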

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-383702
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr: (1.064319707s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)
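
The setup-plus-load sequence amounts to pulling and retagging an image on the host, then copying it into the cluster's runtime. A sketch under the same profile assumption; the echo-server tag matches the one the test uses:

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-383702
    # Push the host daemon's copy into the node's container runtime
    out/minikube-linux-amd64 -p functional-383702 image load --daemon kicbase/echo-server:functional-383702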

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-383702
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image load --daemon kicbase/echo-server:functional-383702 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image save kicbase/echo-server:functional-383702 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image rm kicbase/echo-server:functional-383702 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
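
Save-to-file and load-from-file form a round trip through a tarball, handy for moving images between machines without a registry. A sketch with an illustrative tar path (the test writes into its Jenkins workspace instead):

    # Export the image from the node to a tar archive on the host...
    out/minikube-linux-amd64 -p functional-383702 image save kicbase/echo-server:functional-383702 /tmp/echo-server-save.tar
    # ...and import it back into the node's runtime
    out/minikube-linux-amd64 -p functional-383702 image load /tmp/echo-server-save.tar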

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-383702
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 image save --daemon kicbase/echo-server:functional-383702 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-383702
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
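
image save --daemon goes the other direction: it copies an image out of the node into the host's Docker daemon, where it shows up under the localhost/ prefix, which is what the inspect step above relies on. Sketch, same profile assumption:

    out/minikube-linux-amd64 -p functional-383702 image save --daemon kicbase/echo-server:functional-383702
    docker image inspect localhost/kicbase/echo-server:functional-383702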

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-383702 service list: (1.696881257s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-383702 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-383702 service list -o json: (1.710064615s)
functional_test.go:1504: Took "1.710179081s" to run "out/minikube-linux-amd64 -p functional-383702 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)
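
service list -o json is the machine-readable form of the listing, suitable for piping into a JSON processor on the host. A sketch; the jq filter and the Name field are assumptions about the output shape, not something this test asserts:

    out/minikube-linux-amd64 -p functional-383702 service list -o json
    # e.g. pull out service names, assuming jq is installed and each entry has a Name field
    out/minikube-linux-amd64 -p functional-383702 service list -o json | jq -r '.[].Name'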

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-383702
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-383702
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-383702
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (135.69s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m14.965097918s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (135.69s)
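
The cluster under test is created with --ha, which provisions multiple control-plane nodes behind a shared endpoint. A minimal sketch of the same invocation, flags copied from the log:

    out/minikube-linux-amd64 -p ha-713371 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    # status should then report every control-plane and worker node
    out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5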

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.68s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 kubectl -- rollout status deployment/busybox: (3.642608073s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rn5z9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rp9bk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-s95wc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rn5z9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rp9bk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-s95wc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rn5z9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rp9bk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-s95wc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.68s)

TestMultiControlPlane/serial/PingHostFromPods (1.13s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rn5z9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rn5z9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rp9bk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rp9bk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-s95wc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-s95wc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)
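
The host-reachability check resolves host.minikube.internal inside a pod and pings the address it returns (192.168.49.1, the docker network gateway, in this run). A sketch; the pod name is generated per deployment, so yours will differ:

    out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rn5z9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 -p ha-713371 kubectl -- exec busybox-7b57f96db7-rn5z9 -- sh -c "ping -c 1 192.168.49.1"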

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.1s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 node add --alsologtostderr -v 5
E0926 22:52:12.143124  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 node add --alsologtostderr -v 5: (53.230173971s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.10s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-713371 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

TestMultiControlPlane/serial/CopyFile (16.7s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp testdata/cp-test.txt ha-713371:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4240570321/001/cp-test_ha-713371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371:/home/docker/cp-test.txt ha-713371-m02:/home/docker/cp-test_ha-713371_ha-713371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m02 "sudo cat /home/docker/cp-test_ha-713371_ha-713371-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371:/home/docker/cp-test.txt ha-713371-m03:/home/docker/cp-test_ha-713371_ha-713371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m03 "sudo cat /home/docker/cp-test_ha-713371_ha-713371-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371:/home/docker/cp-test.txt ha-713371-m04:/home/docker/cp-test_ha-713371_ha-713371-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m04 "sudo cat /home/docker/cp-test_ha-713371_ha-713371-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp testdata/cp-test.txt ha-713371-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4240570321/001/cp-test_ha-713371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m02:/home/docker/cp-test.txt ha-713371:/home/docker/cp-test_ha-713371-m02_ha-713371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371 "sudo cat /home/docker/cp-test_ha-713371-m02_ha-713371.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m02:/home/docker/cp-test.txt ha-713371-m03:/home/docker/cp-test_ha-713371-m02_ha-713371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m03 "sudo cat /home/docker/cp-test_ha-713371-m02_ha-713371-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m02:/home/docker/cp-test.txt ha-713371-m04:/home/docker/cp-test_ha-713371-m02_ha-713371-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m04 "sudo cat /home/docker/cp-test_ha-713371-m02_ha-713371-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp testdata/cp-test.txt ha-713371-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4240570321/001/cp-test_ha-713371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m03:/home/docker/cp-test.txt ha-713371:/home/docker/cp-test_ha-713371-m03_ha-713371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371 "sudo cat /home/docker/cp-test_ha-713371-m03_ha-713371.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m03:/home/docker/cp-test.txt ha-713371-m02:/home/docker/cp-test_ha-713371-m03_ha-713371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m02 "sudo cat /home/docker/cp-test_ha-713371-m03_ha-713371-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m03:/home/docker/cp-test.txt ha-713371-m04:/home/docker/cp-test_ha-713371-m03_ha-713371-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m04 "sudo cat /home/docker/cp-test_ha-713371-m03_ha-713371-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp testdata/cp-test.txt ha-713371-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4240570321/001/cp-test_ha-713371-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m04:/home/docker/cp-test.txt ha-713371:/home/docker/cp-test_ha-713371-m04_ha-713371.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371 "sudo cat /home/docker/cp-test_ha-713371-m04_ha-713371.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m04:/home/docker/cp-test.txt ha-713371-m02:/home/docker/cp-test_ha-713371-m04_ha-713371-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m02 "sudo cat /home/docker/cp-test_ha-713371-m04_ha-713371-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m04:/home/docker/cp-test.txt ha-713371-m03:/home/docker/cp-test_ha-713371-m04_ha-713371-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 ssh -n ha-713371-m03 "sudo cat /home/docker/cp-test_ha-713371-m04_ha-713371-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.70s)
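
minikube cp accepts node-qualified paths on either side, so one command covers every direction the matrix above exercises. A sketch of the three cases, with illustrative paths:

    # host -> node
    out/minikube-linux-amd64 -p ha-713371 cp testdata/cp-test.txt ha-713371-m02:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m02:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    out/minikube-linux-amd64 -p ha-713371 cp ha-713371-m02:/home/docker/cp-test.txt ha-713371-m03:/home/docker/cp-test.txt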

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19.78s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 node stop m02 --alsologtostderr -v 5: (19.096178957s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5: exit status 7 (681.549069ms)

-- stdout --
	ha-713371
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-713371-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-713371-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-713371-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 22:53:13.638859  282781 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:53:13.639177  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:13.639187  282781 out.go:374] Setting ErrFile to fd 2...
	I0926 22:53:13.639192  282781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:53:13.639378  282781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 22:53:13.639554  282781 out.go:368] Setting JSON to false
	I0926 22:53:13.639598  282781 mustload.go:65] Loading cluster: ha-713371
	I0926 22:53:13.639728  282781 notify.go:220] Checking for updates...
	I0926 22:53:13.639995  282781 config.go:182] Loaded profile config "ha-713371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:53:13.640011  282781 status.go:174] checking status of ha-713371 ...
	I0926 22:53:13.640549  282781 cli_runner.go:164] Run: docker container inspect ha-713371 --format={{.State.Status}}
	I0926 22:53:13.662351  282781 status.go:371] ha-713371 host status = "Running" (err=<nil>)
	I0926 22:53:13.662374  282781 host.go:66] Checking if "ha-713371" exists ...
	I0926 22:53:13.662628  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-713371
	I0926 22:53:13.680981  282781 host.go:66] Checking if "ha-713371" exists ...
	I0926 22:53:13.681274  282781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:53:13.681317  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-713371
	I0926 22:53:13.699107  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/ha-713371/id_rsa Username:docker}
	I0926 22:53:13.792647  282781 ssh_runner.go:195] Run: systemctl --version
	I0926 22:53:13.796986  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:53:13.810150  282781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 22:53:13.866502  282781 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-26 22:53:13.855426808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 22:53:13.867073  282781 kubeconfig.go:125] found "ha-713371" server: "https://192.168.49.254:8443"
	I0926 22:53:13.867122  282781 api_server.go:166] Checking apiserver status ...
	I0926 22:53:13.867169  282781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:53:13.879496  282781 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	W0926 22:53:13.889310  282781 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 22:53:13.889352  282781 ssh_runner.go:195] Run: ls
	I0926 22:53:13.893188  282781 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0926 22:53:13.899852  282781 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0926 22:53:13.899874  282781 status.go:463] ha-713371 apiserver status = Running (err=<nil>)
	I0926 22:53:13.899885  282781 status.go:176] ha-713371 status: &{Name:ha-713371 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:53:13.899903  282781 status.go:174] checking status of ha-713371-m02 ...
	I0926 22:53:13.900150  282781 cli_runner.go:164] Run: docker container inspect ha-713371-m02 --format={{.State.Status}}
	I0926 22:53:13.917994  282781 status.go:371] ha-713371-m02 host status = "Stopped" (err=<nil>)
	I0926 22:53:13.918017  282781 status.go:384] host is not running, skipping remaining checks
	I0926 22:53:13.918025  282781 status.go:176] ha-713371-m02 status: &{Name:ha-713371-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:53:13.918046  282781 status.go:174] checking status of ha-713371-m03 ...
	I0926 22:53:13.918324  282781 cli_runner.go:164] Run: docker container inspect ha-713371-m03 --format={{.State.Status}}
	I0926 22:53:13.936447  282781 status.go:371] ha-713371-m03 host status = "Running" (err=<nil>)
	I0926 22:53:13.936469  282781 host.go:66] Checking if "ha-713371-m03" exists ...
	I0926 22:53:13.936719  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-713371-m03
	I0926 22:53:13.955539  282781 host.go:66] Checking if "ha-713371-m03" exists ...
	I0926 22:53:13.955834  282781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:53:13.955871  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-713371-m03
	I0926 22:53:13.974640  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/ha-713371-m03/id_rsa Username:docker}
	I0926 22:53:14.067626  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:53:14.080470  282781 kubeconfig.go:125] found "ha-713371" server: "https://192.168.49.254:8443"
	I0926 22:53:14.080499  282781 api_server.go:166] Checking apiserver status ...
	I0926 22:53:14.080539  282781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 22:53:14.092060  282781 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1349/cgroup
	W0926 22:53:14.101709  282781 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1349/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 22:53:14.101753  282781 ssh_runner.go:195] Run: ls
	I0926 22:53:14.105511  282781 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0926 22:53:14.109749  282781 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0926 22:53:14.109775  282781 status.go:463] ha-713371-m03 apiserver status = Running (err=<nil>)
	I0926 22:53:14.109784  282781 status.go:176] ha-713371-m03 status: &{Name:ha-713371-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:53:14.109800  282781 status.go:174] checking status of ha-713371-m04 ...
	I0926 22:53:14.110041  282781 cli_runner.go:164] Run: docker container inspect ha-713371-m04 --format={{.State.Status}}
	I0926 22:53:14.128657  282781 status.go:371] ha-713371-m04 host status = "Running" (err=<nil>)
	I0926 22:53:14.128681  282781 host.go:66] Checking if "ha-713371-m04" exists ...
	I0926 22:53:14.128976  282781 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-713371-m04
	I0926 22:53:14.147270  282781 host.go:66] Checking if "ha-713371-m04" exists ...
	I0926 22:53:14.147547  282781 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 22:53:14.147587  282781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-713371-m04
	I0926 22:53:14.164387  282781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/ha-713371-m04/id_rsa Username:docker}
	I0926 22:53:14.258484  282781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 22:53:14.270437  282781 status.go:176] ha-713371-m04 status: &{Name:ha-713371-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.78s)
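
Note the exit code: with m02 stopped, status returns exit status 7 rather than 0, so scripts can detect a degraded cluster without parsing the per-node output. A sketch (the shell check is illustrative):

    out/minikube-linux-amd64 -p ha-713371 node stop m02
    out/minikube-linux-amd64 -p ha-713371 status || echo "degraded: status exited $?"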

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.82s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 node start m02 --alsologtostderr -v 5: (7.910403369s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.82s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (111.85s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 stop --alsologtostderr -v 5
E0926 22:53:35.217391  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:43.694173  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:43.700556  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:43.711884  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:43.733170  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:43.774546  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:43.855991  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:44.017389  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:44.339102  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:44.981146  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:46.263009  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:48.825697  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:53:53.947253  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:54:04.188668  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 stop --alsologtostderr -v 5: (51.003110419s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 start --wait true --alsologtostderr -v 5
E0926 22:54:24.670266  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:55:05.632216  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 start --wait true --alsologtostderr -v 5: (1m0.732400153s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (111.85s)
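
The keeps-nodes check is a stop/start round trip: record the node list, stop everything, restart with --wait, and the same list should come back. Sketch, flags abbreviated from the log:

    out/minikube-linux-amd64 -p ha-713371 node list
    out/minikube-linux-amd64 -p ha-713371 stop
    out/minikube-linux-amd64 -p ha-713371 start --wait true
    out/minikube-linux-amd64 -p ha-713371 node list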

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.34s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 node delete m03 --alsologtostderr -v 5: (10.533768728s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.34s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (41.19s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 stop --alsologtostderr -v 5: (41.080101616s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5: exit status 7 (107.211078ms)

-- stdout --
	ha-713371
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-713371-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-713371-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 22:56:09.693908  299117 out.go:360] Setting OutFile to fd 1 ...
	I0926 22:56:09.694202  299117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:56:09.694214  299117 out.go:374] Setting ErrFile to fd 2...
	I0926 22:56:09.694220  299117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 22:56:09.694456  299117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 22:56:09.694729  299117 out.go:368] Setting JSON to false
	I0926 22:56:09.694784  299117 mustload.go:65] Loading cluster: ha-713371
	I0926 22:56:09.694891  299117 notify.go:220] Checking for updates...
	I0926 22:56:09.695245  299117 config.go:182] Loaded profile config "ha-713371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 22:56:09.695265  299117 status.go:174] checking status of ha-713371 ...
	I0926 22:56:09.695755  299117 cli_runner.go:164] Run: docker container inspect ha-713371 --format={{.State.Status}}
	I0926 22:56:09.715624  299117 status.go:371] ha-713371 host status = "Stopped" (err=<nil>)
	I0926 22:56:09.715648  299117 status.go:384] host is not running, skipping remaining checks
	I0926 22:56:09.715656  299117 status.go:176] ha-713371 status: &{Name:ha-713371 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:56:09.715687  299117 status.go:174] checking status of ha-713371-m02 ...
	I0926 22:56:09.715926  299117 cli_runner.go:164] Run: docker container inspect ha-713371-m02 --format={{.State.Status}}
	I0926 22:56:09.733685  299117 status.go:371] ha-713371-m02 host status = "Stopped" (err=<nil>)
	I0926 22:56:09.733728  299117 status.go:384] host is not running, skipping remaining checks
	I0926 22:56:09.733742  299117 status.go:176] ha-713371-m02 status: &{Name:ha-713371-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 22:56:09.733774  299117 status.go:174] checking status of ha-713371-m04 ...
	I0926 22:56:09.734063  299117 cli_runner.go:164] Run: docker container inspect ha-713371-m04 --format={{.State.Status}}
	I0926 22:56:09.751021  299117 status.go:371] ha-713371-m04 host status = "Stopped" (err=<nil>)
	I0926 22:56:09.751058  299117 status.go:384] host is not running, skipping remaining checks
	I0926 22:56:09.751068  299117 status.go:176] ha-713371-m04 status: &{Name:ha-713371-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.19s)

TestMultiControlPlane/serial/RestartCluster (58.34s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0926 22:56:27.555330  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.55588251s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.34s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (66.87s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 node add --control-plane --alsologtostderr -v 5
E0926 22:57:12.144303  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-713371 node add --control-plane --alsologtostderr -v 5: (1m6.014972875s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (66.87s)
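
Adding control-plane capacity after the fact uses the same node add command as the worker case, with --control-plane to join the new node as another control plane. Sketch:

    out/minikube-linux-amd64 -p ha-713371 node add --control-plane
    out/minikube-linux-amd64 -p ha-713371 status --alsologtostderr -v 5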

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (69.55s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-645794 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0926 22:58:43.693777  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 22:59:11.404021  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-645794 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m9.550662136s)
--- PASS: TestJSONOutput/start/Command (69.55s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-645794 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-645794 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.09s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-645794 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-645794 --output=json --user=testUser: (6.089519534s)
--- PASS: TestJSONOutput/stop/Command (6.09s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-617272 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-617272 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (63.238675ms)
-- stdout --
	{"specversion":"1.0","id":"2f9e0fe8-301d-47ea-b40e-ac1fc2910080","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-617272] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"704ec0dd-f252-448b-8e74-acc0b53276c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21642"}}
	{"specversion":"1.0","id":"44ae0957-e2fd-4ac8-b130-78747b398658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b90f2948-464d-45f5-8be3-899b39ef4eef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig"}}
	{"specversion":"1.0","id":"032e4311-bd63-416f-a2f4-50f756b07de9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube"}}
	{"specversion":"1.0","id":"465d2a9f-076d-4ac1-bcab-7d3ab1c47c8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4065e220-65aa-4aaf-b5ea-92ef0b14f5e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8f6841ac-7748-46e8-a4d8-ca6c436cea60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-617272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-617272
--- PASS: TestErrorJSONOutput (0.20s)
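
Note: the stdout block above is minikube's --output=json format: one CloudEvents JSON object per line, with event types such as io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info, and io.k8s.sigs.minikube.error. A minimal consumer sketch, assuming only the field shapes visible in the events above (all data values arrive as strings):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // Just the envelope fields this sketch needs from each line above.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // e.g. pipe in: minikube start --output=json ...
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // step lines can be long
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // tolerate any non-JSON noise
            }
            switch e.Type {
            case "io.k8s.sigs.minikube.step":
                fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["name"])
            case "io.k8s.sigs.minikube.error":
                fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
            }
        }
    }

The DistinctCurrentSteps and IncreasingCurrentSteps subtests earlier are named for properties of these data.currentstep values: no duplicates, and monotonically increasing.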

TestKicCustomNetwork/create_custom_network (31.08s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-370869 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-370869 --network=: (28.960000471s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-370869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-370869
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-370869: (2.101448645s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.08s)
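
Note: the validator at kic_custom_network_test.go:150 shells out to `docker network ls --format {{.Name}}` to confirm the expected network exists; with an empty --network= value, minikube appears to name the created network after the profile. A rough Go equivalent of that check (the profile name is copied from this run and is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Same command the test's validator runs.
        out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "docker network ls:", err)
            os.Exit(1)
        }
        want := "docker-network-370869" // network named after the profile
        for _, name := range strings.Fields(string(out)) {
            if name == want {
                fmt.Println("found network", want)
                return
            }
        }
        fmt.Println("network", want, "not found")
    }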

TestKicCustomNetwork/use_default_bridge_network (23.5s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-627896 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-627896 --network=bridge: (21.541274756s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-627896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-627896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-627896: (1.938171864s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.50s)

TestKicExistingNetwork (24.22s)

=== RUN   TestKicExistingNetwork
I0926 23:00:40.999208  212137 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0926 23:00:41.017404  212137 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0926 23:00:41.017503  212137 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0926 23:00:41.017526  212137 cli_runner.go:164] Run: docker network inspect existing-network
W0926 23:00:41.034118  212137 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0926 23:00:41.034152  212137 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0926 23:00:41.034165  212137 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0926 23:00:41.034316  212137 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0926 23:00:41.051849  212137 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-61b47db54300 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:5a:0f:e5:da:60} reservation:<nil>}
I0926 23:00:41.052269  212137 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e25170}
I0926 23:00:41.052306  212137 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0926 23:00:41.052364  212137 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0926 23:00:41.107571  212137 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-598011 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-598011 --network=existing-network: (22.14972399s)
helpers_test.go:175: Cleaning up "existing-network-598011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-598011
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-598011: (1.930439556s)
I0926 23:01:05.204924  212137 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.22s)
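
Note: the network.go:211/206 lines above show the subnet scan: 192.168.49.0/24 is skipped because the addons cluster's bridge already holds it, and 192.168.58.0/24 is taken instead. A simplified stand-in for that scan, assuming the +9 third-octet stride suggested by the subnets seen in this run (192.168.49.0, 192.168.58.0, 192.168.67.0); minikube's real helper lives in pkg/network and also probes the host's interfaces:

    package main

    import "fmt"

    // firstFreeSubnet walks candidate /24s and returns the first one not
    // already reserved. The start octet and stride are assumptions read
    // off this run's logs, not verified against minikube's source.
    func firstFreeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                return subnet
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{"192.168.49.0/24": true} // held by br-61b47db54300
        fmt.Println(firstFreeSubnet(taken))               // 192.168.58.0/24, as in the log
    }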

TestKicCustomSubnet (25.08s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-441436 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-441436 --subnet=192.168.60.0/24: (22.978965551s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-441436 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-441436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-441436
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-441436: (2.08189228s)
--- PASS: TestKicCustomSubnet (25.08s)

TestKicStaticIP (23.56s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-104059 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-104059 --static-ip=192.168.200.200: (21.321350422s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-104059 ip
helpers_test.go:175: Cleaning up "static-ip-104059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-104059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-104059: (2.100058791s)
--- PASS: TestKicStaticIP (23.56s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (47.95s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-009835 --driver=docker  --container-runtime=crio
E0926 23:02:12.150898  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-009835 --driver=docker  --container-runtime=crio: (20.438591846s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-022831 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-022831 --driver=docker  --container-runtime=crio: (21.646505324s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-009835
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-022831
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-022831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-022831
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-022831: (2.331940467s)
helpers_test.go:175: Cleaning up "first-009835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-009835
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-009835: (2.341768626s)
--- PASS: TestMinikubeProfile (47.95s)
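
Note: minikube_profile_test.go:55 parses `profile list -ojson`. A minimal decoder sketch; the struct declares only what the sketch needs, and the schema (top-level "valid"/"invalid" arrays of profiles carrying a "Name") is assumed from the command's JSON output rather than verified here:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Assumed subset of the `profile list -o json` schema.
    type profileList struct {
        Valid []struct {
            Name string `json:"Name"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            fmt.Println("profile:", p.Name) // e.g. first-009835, second-022831
        }
    }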

TestMountStart/serial/StartWithMountFirst (5.2s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-348391 --memory=3072 --mount-string /tmp/TestMountStartserial1563124712/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-348391 --memory=3072 --mount-string /tmp/TestMountStartserial1563124712/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.196560944s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.20s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-348391 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (5.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-362626 --memory=3072 --mount-string /tmp/TestMountStartserial1563124712/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-362626 --memory=3072 --mount-string /tmp/TestMountStartserial1563124712/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.587767247s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.59s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-362626 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-348391 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-348391 --alsologtostderr -v=5: (1.644813446s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-362626 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-362626
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-362626: (1.18647942s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.28s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-362626
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-362626: (6.276002931s)
--- PASS: TestMountStart/serial/RestartStopped (7.28s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-362626 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (94.59s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-460778 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0926 23:03:43.694388  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-460778 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m34.11623335s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.59s)

TestMultiNode/serial/DeployApp2Nodes (4.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-460778 -- rollout status deployment/busybox: (3.511258378s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-6p6g6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-xfjfl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-6p6g6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-xfjfl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-6p6g6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-xfjfl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)

TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-6p6g6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-6p6g6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-xfjfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-460778 -- exec busybox-7b57f96db7-xfjfl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)
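
Note on the shell pipeline above: busybox's nslookup prints the resolved address of host.minikube.internal on its fifth output line, and `awk 'NR==5' | cut -d' ' -f3` slices out the third space-delimited field of that line, i.e. the host gateway IP (192.168.67.1 here) that the follow-up ping targets. The same slicing in Go, against a hand-written sample of busybox nslookup output (illustrative, not captured from this run):

    package main

    import (
        "fmt"
        "strings"
    )

    // Sample busybox nslookup output, one line per element; the answer
    // for the queried name lands on line 5.
    var nslookupLines = []string{
        "Server:    10.96.0.10",
        "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local",
        "",
        "Name:      host.minikube.internal",
        "Address 1: 192.168.67.1 host.minikube.internal",
    }

    func main() {
        line5 := nslookupLines[4]           // awk 'NR==5'
        fields := strings.Split(line5, " ") // cut -d' ' splits on single spaces
        fmt.Println(fields[2])              // "192.168.67.1"
    }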

TestMultiNode/serial/AddNode (53.86s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-460778 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-460778 -v=5 --alsologtostderr: (53.234182374s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.86s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-460778 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (9.41s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp testdata/cp-test.txt multinode-460778:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp multinode-460778:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2020046927/001/cp-test_multinode-460778.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp multinode-460778:/home/docker/cp-test.txt multinode-460778-m02:/home/docker/cp-test_multinode-460778_multinode-460778-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m02 "sudo cat /home/docker/cp-test_multinode-460778_multinode-460778-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp multinode-460778:/home/docker/cp-test.txt multinode-460778-m03:/home/docker/cp-test_multinode-460778_multinode-460778-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m03 "sudo cat /home/docker/cp-test_multinode-460778_multinode-460778-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp testdata/cp-test.txt multinode-460778-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp multinode-460778-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2020046927/001/cp-test_multinode-460778-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp multinode-460778-m02:/home/docker/cp-test.txt multinode-460778:/home/docker/cp-test_multinode-460778-m02_multinode-460778.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778 "sudo cat /home/docker/cp-test_multinode-460778-m02_multinode-460778.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp multinode-460778-m02:/home/docker/cp-test.txt multinode-460778-m03:/home/docker/cp-test_multinode-460778-m02_multinode-460778-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m03 "sudo cat /home/docker/cp-test_multinode-460778-m02_multinode-460778-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp testdata/cp-test.txt multinode-460778-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp multinode-460778-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2020046927/001/cp-test_multinode-460778-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp multinode-460778-m03:/home/docker/cp-test.txt multinode-460778:/home/docker/cp-test_multinode-460778-m03_multinode-460778.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778 "sudo cat /home/docker/cp-test_multinode-460778-m03_multinode-460778.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 cp multinode-460778-m03:/home/docker/cp-test.txt multinode-460778-m02:/home/docker/cp-test_multinode-460778-m03_multinode-460778-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 ssh -n multinode-460778-m02 "sudo cat /home/docker/cp-test_multinode-460778-m03_multinode-460778-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.41s)

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-460778 node stop m03: (1.295136508s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-460778 status: exit status 7 (470.244906ms)
-- stdout --
	multinode-460778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-460778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-460778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-460778 status --alsologtostderr: exit status 7 (477.105311ms)
-- stdout --
	multinode-460778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-460778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-460778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0926 23:05:51.630607  362179 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:05:51.630890  362179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:05:51.630901  362179 out.go:374] Setting ErrFile to fd 2...
	I0926 23:05:51.630905  362179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:05:51.631132  362179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 23:05:51.631302  362179 out.go:368] Setting JSON to false
	I0926 23:05:51.631349  362179 mustload.go:65] Loading cluster: multinode-460778
	I0926 23:05:51.631456  362179 notify.go:220] Checking for updates...
	I0926 23:05:51.631911  362179 config.go:182] Loaded profile config "multinode-460778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:05:51.631931  362179 status.go:174] checking status of multinode-460778 ...
	I0926 23:05:51.632458  362179 cli_runner.go:164] Run: docker container inspect multinode-460778 --format={{.State.Status}}
	I0926 23:05:51.650748  362179 status.go:371] multinode-460778 host status = "Running" (err=<nil>)
	I0926 23:05:51.650776  362179 host.go:66] Checking if "multinode-460778" exists ...
	I0926 23:05:51.651049  362179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-460778
	I0926 23:05:51.668623  362179 host.go:66] Checking if "multinode-460778" exists ...
	I0926 23:05:51.668869  362179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:05:51.668923  362179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-460778
	I0926 23:05:51.685928  362179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/multinode-460778/id_rsa Username:docker}
	I0926 23:05:51.779630  362179 ssh_runner.go:195] Run: systemctl --version
	I0926 23:05:51.784151  362179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:05:51.795632  362179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:05:51.853544  362179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-26 23:05:51.843784792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:05:51.854099  362179 kubeconfig.go:125] found "multinode-460778" server: "https://192.168.67.2:8443"
	I0926 23:05:51.854135  362179 api_server.go:166] Checking apiserver status ...
	I0926 23:05:51.854176  362179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 23:05:51.866197  362179 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup
	W0926 23:05:51.876285  362179 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0926 23:05:51.876343  362179 ssh_runner.go:195] Run: ls
	I0926 23:05:51.879933  362179 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0926 23:05:51.884003  362179 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0926 23:05:51.884023  362179 status.go:463] multinode-460778 apiserver status = Running (err=<nil>)
	I0926 23:05:51.884040  362179 status.go:176] multinode-460778 status: &{Name:multinode-460778 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:05:51.884057  362179 status.go:174] checking status of multinode-460778-m02 ...
	I0926 23:05:51.884306  362179 cli_runner.go:164] Run: docker container inspect multinode-460778-m02 --format={{.State.Status}}
	I0926 23:05:51.901320  362179 status.go:371] multinode-460778-m02 host status = "Running" (err=<nil>)
	I0926 23:05:51.901345  362179 host.go:66] Checking if "multinode-460778-m02" exists ...
	I0926 23:05:51.901592  362179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-460778-m02
	I0926 23:05:51.918818  362179 host.go:66] Checking if "multinode-460778-m02" exists ...
	I0926 23:05:51.919215  362179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0926 23:05:51.919256  362179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-460778-m02
	I0926 23:05:51.935981  362179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21642-208519/.minikube/machines/multinode-460778-m02/id_rsa Username:docker}
	I0926 23:05:52.029414  362179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 23:05:52.041254  362179 status.go:176] multinode-460778-m02 status: &{Name:multinode-460778-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:05:52.041295  362179 status.go:174] checking status of multinode-460778-m03 ...
	I0926 23:05:52.041529  362179 cli_runner.go:164] Run: docker container inspect multinode-460778-m03 --format={{.State.Status}}
	I0926 23:05:52.059431  362179 status.go:371] multinode-460778-m03 host status = "Stopped" (err=<nil>)
	I0926 23:05:52.059452  362179 status.go:384] host is not running, skipping remaining checks
	I0926 23:05:52.059458  362179 status.go:176] multinode-460778-m03 status: &{Name:multinode-460778-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

TestMultiNode/serial/StartAfterStop (7.26s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-460778 node start m03 -v=5 --alsologtostderr: (6.590588118s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.26s)

TestMultiNode/serial/RestartKeepsNodes (76.41s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-460778
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-460778
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-460778: (31.333635471s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-460778 --wait=true -v=5 --alsologtostderr
E0926 23:07:12.143258  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-460778 --wait=true -v=5 --alsologtostderr: (44.975750987s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-460778
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.41s)
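
Note: this test snapshots `minikube node list` before the stop and compares it with the output after the restart, so node names and IPs must survive the round trip. A bare-bones version of that comparison (profile name copied from this run; the stop/restart step between the two snapshots is elided):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func nodeList() (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64", "node", "list", "-p", "multinode-460778").Output()
        return string(out), err
    }

    func main() {
        before, err := nodeList()
        if err != nil {
            panic(err)
        }
        // ... `minikube stop` and `minikube start --wait=true` would run here ...
        after, err := nodeList()
        if err != nil {
            panic(err)
        }
        if before != after {
            fmt.Printf("node list changed across restart:\nbefore:\n%s\nafter:\n%s\n", before, after)
            return
        }
        fmt.Println("node list preserved across restart")
    }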

TestMultiNode/serial/DeleteNode (5.22s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-460778 node delete m03: (4.642033634s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

TestMultiNode/serial/StopMultiNode (28.6s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-460778 stop: (28.428656748s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-460778 status: exit status 7 (87.926163ms)
-- stdout --
	multinode-460778
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-460778-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-460778 status --alsologtostderr: exit status 7 (83.828ms)
-- stdout --
	multinode-460778
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-460778-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0926 23:07:49.516053  372406 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:07:49.516302  372406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:07:49.516312  372406 out.go:374] Setting ErrFile to fd 2...
	I0926 23:07:49.516316  372406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:07:49.516524  372406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 23:07:49.516718  372406 out.go:368] Setting JSON to false
	I0926 23:07:49.516758  372406 mustload.go:65] Loading cluster: multinode-460778
	I0926 23:07:49.516841  372406 notify.go:220] Checking for updates...
	I0926 23:07:49.517218  372406 config.go:182] Loaded profile config "multinode-460778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:07:49.517239  372406 status.go:174] checking status of multinode-460778 ...
	I0926 23:07:49.517668  372406 cli_runner.go:164] Run: docker container inspect multinode-460778 --format={{.State.Status}}
	I0926 23:07:49.536219  372406 status.go:371] multinode-460778 host status = "Stopped" (err=<nil>)
	I0926 23:07:49.536242  372406 status.go:384] host is not running, skipping remaining checks
	I0926 23:07:49.536250  372406 status.go:176] multinode-460778 status: &{Name:multinode-460778 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 23:07:49.536309  372406 status.go:174] checking status of multinode-460778-m02 ...
	I0926 23:07:49.536556  372406 cli_runner.go:164] Run: docker container inspect multinode-460778-m02 --format={{.State.Status}}
	I0926 23:07:49.552817  372406 status.go:371] multinode-460778-m02 host status = "Stopped" (err=<nil>)
	I0926 23:07:49.552838  372406 status.go:384] host is not running, skipping remaining checks
	I0926 23:07:49.552846  372406 status.go:176] multinode-460778-m02 status: &{Name:multinode-460778-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.60s)
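Note the deliberate non-zero exit: `minikube status` returns 7 when the host is stopped, so scripts can branch on the code instead of parsing the text. A minimal sketch, profile name taken from this run:

    out/minikube-linux-amd64 -p multinode-460778 status
    rc=$?   # 0 = running; 7 here, since host/kubelet/apiserver all report Stopped
    [ "$rc" -eq 7 ] && echo "multinode-460778 is stopped"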

                                                
                                    
TestMultiNode/serial/RestartMultiNode (46.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-460778 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-460778 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (45.547982016s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-460778 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.13s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-460778
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-460778-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-460778-m02 --driver=docker  --container-runtime=crio: exit status 14 (64.594428ms)

                                                
                                                
-- stdout --
	* [multinode-460778-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-460778-m02' is duplicated with machine name 'multinode-460778-m02' in profile 'multinode-460778'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-460778-m03 --driver=docker  --container-runtime=crio
E0926 23:08:43.694347  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-460778-m03 --driver=docker  --container-runtime=crio: (21.054182133s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-460778
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-460778: exit status 80 (276.802899ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-460778 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-460778-m03 already exists in multinode-460778-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-460778-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-460778-m03: (2.342386502s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.79s)
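The rule this test pins down: a new profile may not reuse a machine name inside an existing multi-node profile, and conversely a standalone profile can squat on the name the next `node add` would pick. A sketch of both failure modes, names from this run:

    # Exit 14: 'multinode-460778-m02' is already the second machine of profile 'multinode-460778'.
    out/minikube-linux-amd64 start -p multinode-460778-m02 --driver=docker --container-runtime=crio
    # Allowed: '-m03' is still free, so it becomes an independent single-node profile...
    out/minikube-linux-amd64 start -p multinode-460778-m03 --driver=docker --container-runtime=crio
    # ...which then makes `node add` fail (exit 80), since the new node would also be named '-m03'.
    out/minikube-linux-amd64 node add -p multinode-460778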

                                                
                                    
TestPreload (108.51s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-344848 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-344848 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.394802481s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-344848 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-344848 image pull gcr.io/k8s-minikube/busybox: (2.524059732s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-344848
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-344848: (5.807854262s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-344848 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0926 23:10:06.765613  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:10:15.218685  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-344848 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.156218861s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-344848 image list
helpers_test.go:175: Cleaning up "test-preload-344848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-344848
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-344848: (2.394088602s)
--- PASS: TestPreload (108.51s)
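The point of `--preload=false`: the cluster starts without the preloaded image tarball, so an image pulled afterwards has to survive a stop/restart purely through the container runtime's own storage. The cycle, condensed from the commands above:

    out/minikube-linux-amd64 start -p test-preload-344848 --memory=3072 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    out/minikube-linux-amd64 -p test-preload-344848 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-344848
    out/minikube-linux-amd64 start -p test-preload-344848 --memory=3072 --wait=true --driver=docker --container-runtime=crio
    # The pulled image should still be listed after the restart.
    out/minikube-linux-amd64 -p test-preload-344848 image list | grep busybox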

                                                
                                    
TestScheduledStopUnix (95.69s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-352303 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-352303 --memory=3072 --driver=docker  --container-runtime=crio: (19.988701675s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-352303 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-352303 -n scheduled-stop-352303
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-352303 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0926 23:11:12.501904  212137 retry.go:31] will retry after 144.831µs: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.503072  212137 retry.go:31] will retry after 193.161µs: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.504233  212137 retry.go:31] will retry after 122.376µs: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.505308  212137 retry.go:31] will retry after 441.35µs: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.506378  212137 retry.go:31] will retry after 696.543µs: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.507510  212137 retry.go:31] will retry after 784.761µs: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.508645  212137 retry.go:31] will retry after 1.084888ms: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.509779  212137 retry.go:31] will retry after 909.148µs: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.510900  212137 retry.go:31] will retry after 1.464932ms: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.513138  212137 retry.go:31] will retry after 4.197379ms: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.518301  212137 retry.go:31] will retry after 8.381742ms: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.527504  212137 retry.go:31] will retry after 5.113068ms: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.533153  212137 retry.go:31] will retry after 17.832467ms: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.551426  212137 retry.go:31] will retry after 12.301156ms: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
I0926 23:11:12.564728  212137 retry.go:31] will retry after 37.864358ms: open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/scheduled-stop-352303/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-352303 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-352303 -n scheduled-stop-352303
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-352303
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-352303 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0926 23:12:12.150440  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-352303
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-352303: exit status 7 (69.681361ms)

                                                
                                                
-- stdout --
	scheduled-stop-352303
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-352303 -n scheduled-stop-352303
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-352303 -n scheduled-stop-352303: exit status 7 (70.069131ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-352303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-352303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-352303: (4.328918293s)
--- PASS: TestScheduledStopUnix (95.69s)
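The scheduling flags exercised above, as one sequence. Issuing a new `--schedule` replaces the pending timer process, which is why the log reports "process already finished" for the superseded one:

    out/minikube-linux-amd64 stop -p scheduled-stop-352303 --schedule 5m      # arm a stop 5 minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-352303 --schedule 15s     # re-arm; kills the 5m timer
    out/minikube-linux-amd64 stop -p scheduled-stop-352303 --cancel-scheduled # disarm entirely
    # A pending schedule is visible through the TimeToStop status field.
    out/minikube-linux-amd64 status --format='{{.TimeToStop}}' -p scheduled-stop-352303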

                                                
                                    
TestInsufficientStorage (9.37s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-440686 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-440686 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.934932282s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"87246599-b8e4-4e57-9c24-7121dff95313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-440686] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"994742fb-f930-4b3b-9d1c-3b42540d0d57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21642"}}
	{"specversion":"1.0","id":"168a6884-777f-40ce-8ce3-9927b388ca4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8c140dff-7cbf-4cf0-a39b-b6044fc09f4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig"}}
	{"specversion":"1.0","id":"099d7b22-f96f-4d0a-a583-bdbd6539bac1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube"}}
	{"specversion":"1.0","id":"4944e215-46dd-47c2-893f-d7b55cb237d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0ed9b071-398f-460d-9c02-9ba6c51700c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"34e94da3-a38c-41bd-9373-5d4148335ab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"054e6967-8a92-4494-b6e7-018f079bfc3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b1044b06-5adc-4db0-8b0b-e413be72af6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"70de49a3-d7dc-4ed2-8585-9d5c956ac623","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7f034ef7-46de-4e53-b836-2d3a91eeb3ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-440686\" primary control-plane node in \"insufficient-storage-440686\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"df872114-0e57-4669-9e56-c8b3437f5c75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a28a54c-42dd-4823-b488-d1733e7dc180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"577e6a38-d22e-44ce-ac74-826725d94f49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-440686 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-440686 --output=json --layout=cluster: exit status 7 (277.160606ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-440686","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-440686","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 23:12:34.981781  394281 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-440686" does not appear in /home/jenkins/minikube-integration/21642-208519/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-440686 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-440686 --output=json --layout=cluster: exit status 7 (272.042282ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-440686","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-440686","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 23:12:35.254162  394384 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-440686" does not appear in /home/jenkins/minikube-integration/21642-208519/kubeconfig
	E0926 23:12:35.265606  394384 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/insufficient-storage-440686/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-440686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-440686
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-440686: (1.886981263s)
--- PASS: TestInsufficientStorage (9.37s)
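With `--output=json`, start progress is emitted as one CloudEvents object per line, so the failure above is machine-readable; the test provokes it via the MINIKUBE_TEST_STORAGE_CAPACITY=100 / MINIKUBE_TEST_AVAILABLE_STORAGE=19 overrides visible in the event stream. A minimal sketch pulling out the error event (piping through jq is an assumption, not part of the test):

    out/minikube-linux-amd64 start -p insufficient-storage-440686 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + ")"'
    # -> RSRC_DOCKER_STORAGE (exit 26)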

                                                
                                    
TestRunningBinaryUpgrade (45.53s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1289670514 start -p running-upgrade-620941 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1289670514 start -p running-upgrade-620941 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.825227567s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-620941 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-620941 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.739195974s)
helpers_test.go:175: Cleaning up "running-upgrade-620941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-620941
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-620941: (2.39720223s)
--- PASS: TestRunningBinaryUpgrade (45.53s)
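The shape of a running-binary upgrade: an older release creates the cluster and leaves it running, then the binary under test restarts the same profile in place, inheriting its state. Condensed from the commands above (the /tmp path is the test's temporary copy of the v1.32.0 release):

    # Old binary still takes the legacy --vm-driver flag spelling; the new one takes --driver.
    /tmp/minikube-v1.32.0.1289670514 start -p running-upgrade-620941 --memory=3072 --vm-driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-620941 --memory=3072 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-620941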

                                                
                                    
TestKubernetesUpgrade (301.58s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.477892868s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-800294
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-800294: (2.845740458s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-800294 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-800294 status --format={{.Host}}: exit status 7 (75.588956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.930956605s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-800294 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (73.286602ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-800294] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-800294
	    minikube start -p kubernetes-upgrade-800294 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8002942 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-800294 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.377115306s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-800294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-800294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-800294: (2.736365759s)
--- PASS: TestKubernetesUpgrade (301.58s)
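The supported direction is up only: stop the cluster, then start again with a newer --kubernetes-version; asking for an older version against existing state is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and the suggestion to delete and recreate instead. Condensed from the run above:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-800294
    out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=crio
    # Downgrade attempt: exits 106 without touching the cluster.
    out/minikube-linux-amd64 start -p kubernetes-upgrade-800294 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio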

                                                
                                    
TestMissingContainerUpgrade (84.57s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3294126533 start -p missing-upgrade-608308 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3294126533 start -p missing-upgrade-608308 --memory=3072 --driver=docker  --container-runtime=crio: (41.384565927s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-608308
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-608308: (1.791661412s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-608308
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-608308 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-608308 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.199374954s)
helpers_test.go:175: Cleaning up "missing-upgrade-608308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-608308
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-608308: (2.498709605s)
--- PASS: TestMissingContainerUpgrade (84.57s)
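The "missing container" case: the profile's Docker container is stopped and removed behind minikube's back, and the next `start` is expected to notice and recreate it for the still-registered profile. Condensed:

    docker stop missing-upgrade-608308 && docker rm missing-upgrade-608308
    # start must rebuild the container rather than fail on the stale profile
    out/minikube-linux-amd64 start -p missing-upgrade-608308 --memory=3072 --driver=docker --container-runtime=crio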

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-753181 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-753181 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (87.826492ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-753181] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
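--no-kubernetes and --kubernetes-version are mutually exclusive by design (exit 14, MK_USAGE). A sketch of the failing combination and the escape hatch the error message suggests when the version pin comes from the global config:

    # Exit 14: a version pin contradicts --no-kubernetes.
    out/minikube-linux-amd64 start -p NoKubernetes-753181 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    # Drop a globally configured pin first, then retry without the flag:
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-753181 --no-kubernetes --driver=docker --container-runtime=crio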

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (32.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-753181 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-753181 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.393854429s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-753181 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.74s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (20.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-753181 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-753181 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (18.538165644s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-753181 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-753181 status -o json: exit status 2 (345.178955ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-753181","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-753181
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-753181: (2.083077637s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.97s)

                                                
                                    
TestNoKubernetes/serial/Start (10.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-753181 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-753181 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.879335095s)
--- PASS: TestNoKubernetes/serial/Start (10.88s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-753181 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-753181 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.98561ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
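The verification trick: `systemctl is-active` exits 0 only for an active unit (typically 3 for inactive), so tunnelling it through `minikube ssh` gives a one-line kubelet probe; `minikube ssh` itself then exits 1, as seen above. Sketch:

    out/minikube-linux-amd64 ssh -p NoKubernetes-753181 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet running" || echo "kubelet not running"   # exit 1 here, wrapping remote status 3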

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0926 23:13:43.693885  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNoKubernetes/serial/ProfileList (1.76s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-753181
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-753181: (1.203148874s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-753181 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-753181 --driver=docker  --container-runtime=crio: (6.511731964s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.51s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-753181 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-753181 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.004356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (36.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.917735048 start -p stopped-upgrade-227831 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.917735048 start -p stopped-upgrade-227831 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.02647575s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.917735048 -p stopped-upgrade-227831 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.917735048 -p stopped-upgrade-227831 stop: (1.937806637s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-227831 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-227831 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.486672899s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (36.45s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-227831
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-227831: (1.028255173s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                    
TestPause/serial/Start (70.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-974854 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-974854 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m10.081343356s)
--- PASS: TestPause/serial/Start (70.08s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.32s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-974854 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-974854 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.302741306s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.32s)

                                                
                                    
TestPause/serial/Pause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-974854 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-974854 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-974854 --output=json --layout=cluster: exit status 2 (305.846606ms)

                                                
                                                
-- stdout --
	{"Name":"pause-974854","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-974854","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
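The cluster-layout status codes borrow HTTP semantics: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage (compare TestInsufficientStorage above). A sketch reading the per-node apiserver state (jq is an assumption, not part of the test):

    out/minikube-linux-amd64 status -p pause-974854 --output=json --layout=cluster \
      | jq -r '.Nodes[].Components.apiserver.StatusName'   # -> "Paused"; the status command itself exits 2 while paused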

                                                
                                    
TestPause/serial/Unpause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-974854 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
TestPause/serial/PauseAgain (0.7s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-974854 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.70s)

                                                
                                    
TestPause/serial/DeletePaused (2.71s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-974854 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-974854 --alsologtostderr -v=5: (2.712791365s)
--- PASS: TestPause/serial/DeletePaused (2.71s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (13.79s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.728156031s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-974854
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-974854: exit status 1 (20.853902ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-974854: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (13.79s)
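Deletion is verified from the Docker side: the profile must vanish from `minikube profile list`, and its container, named volume, and network (all sharing the profile name) must be gone. Sketch:

    docker ps -a --filter name=pause-974854   # no matching container
    docker volume inspect pause-974854        # exit 1: "no such volume"
    docker network ls | grep pause-974854 || echo "network removed"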

                                                
                                    
TestNetworkPlugins/group/false (3.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-227717 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-227717 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (153.94628ms)

                                                
                                                
-- stdout --
	* [false-227717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 23:16:10.984512  448195 out.go:360] Setting OutFile to fd 1 ...
	I0926 23:16:10.984808  448195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:16:10.984818  448195 out.go:374] Setting ErrFile to fd 2...
	I0926 23:16:10.984822  448195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0926 23:16:10.985037  448195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21642-208519/.minikube/bin
	I0926 23:16:10.985542  448195 out.go:368] Setting JSON to false
	I0926 23:16:10.986774  448195 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10720,"bootTime":1758917851,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0926 23:16:10.986909  448195 start.go:140] virtualization: kvm guest
	I0926 23:16:10.989058  448195 out.go:179] * [false-227717] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0926 23:16:10.990389  448195 out.go:179]   - MINIKUBE_LOCATION=21642
	I0926 23:16:10.990441  448195 notify.go:220] Checking for updates...
	I0926 23:16:10.993280  448195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 23:16:10.994476  448195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21642-208519/kubeconfig
	I0926 23:16:10.995709  448195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21642-208519/.minikube
	I0926 23:16:10.997292  448195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0926 23:16:10.998443  448195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 23:16:11.000175  448195 config.go:182] Loaded profile config "cert-expiration-778862": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:16:11.000351  448195 config.go:182] Loaded profile config "kubernetes-upgrade-800294": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0926 23:16:11.000513  448195 driver.go:421] Setting default libvirt URI to qemu:///system
	I0926 23:16:11.023459  448195 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0926 23:16:11.023548  448195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0926 23:16:11.083072  448195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-26 23:16:11.071364434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0926 23:16:11.083238  448195 docker.go:318] overlay module found
	I0926 23:16:11.084869  448195 out.go:179] * Using the docker driver based on user configuration
	I0926 23:16:11.086018  448195 start.go:304] selected driver: docker
	I0926 23:16:11.086035  448195 start.go:924] validating driver "docker" against <nil>
	I0926 23:16:11.086051  448195 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 23:16:11.087857  448195 out.go:203] 
	W0926 23:16:11.088875  448195 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0926 23:16:11.089943  448195 out.go:203] 

                                                
                                                
** /stderr **
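The hard requirement being pinned here: the crio runtime always needs a CNI, so `--cni=false` is rejected up front (exit 14, MK_USAGE) before any node is created. A sketch of the failing variant and a working one (bridge as the example CNI is my choice, not the test's):

    # Rejected: "The \"crio\" container runtime requires CNI".
    out/minikube-linux-amd64 start -p false-227717 --cni=false --driver=docker --container-runtime=crio
    # Any concrete CNI satisfies the check, e.g.:
    out/minikube-linux-amd64 start -p false-227717 --cni=bridge --driver=docker --container-runtime=crio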
net_test.go:88: 
----------------------- debugLogs start: false-227717 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-227717

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-227717

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-227717

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-227717

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-227717

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-227717

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-227717

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-227717

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-227717

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-227717

>>> host: /etc/nsswitch.conf:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /etc/hosts:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /etc/resolv.conf:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-227717

>>> host: crictl pods:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: crictl containers:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> k8s: describe netcat deployment:
error: context "false-227717" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-227717" does not exist

>>> k8s: netcat logs:
error: context "false-227717" does not exist

>>> k8s: describe coredns deployment:
error: context "false-227717" does not exist

>>> k8s: describe coredns pods:
error: context "false-227717" does not exist

>>> k8s: coredns logs:
error: context "false-227717" does not exist

>>> k8s: describe api server pod(s):
error: context "false-227717" does not exist

>>> k8s: api server logs:
error: context "false-227717" does not exist

>>> host: /etc/cni:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: ip a s:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: ip r s:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: iptables-save:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: iptables table nat:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> k8s: describe kube-proxy daemon set:
error: context "false-227717" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-227717" does not exist

>>> k8s: kube-proxy logs:
error: context "false-227717" does not exist

>>> host: kubelet daemon status:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: kubelet daemon config:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> k8s: kubelet logs:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-778862
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:14:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-800294
contexts:
- context:
    cluster: cert-expiration-778862
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-778862
  name: cert-expiration-778862
- context:
    cluster: kubernetes-upgrade-800294
    user: kubernetes-upgrade-800294
  name: kubernetes-upgrade-800294
current-context: ""
kind: Config
users:
- name: cert-expiration-778862
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/cert-expiration-778862/client.crt
    client-key: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/cert-expiration-778862/client.key
- name: kubernetes-upgrade-800294
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/kubernetes-upgrade-800294/client.crt
    client-key: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/kubernetes-upgrade-800294/client.key
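
Note: the kubectl probes above explicitly request context "false-227717", which this kubeconfig does not contain (only cert-expiration-778862 and kubernetes-upgrade-800294 exist), and current-context is empty, so no context is selected by default. Outside the test, selecting one of the existing contexts would look like:

	kubectl config use-context cert-expiration-778862
	kubectl --context kubernetes-upgrade-800294 get nodes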

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-227717

>>> host: docker daemon status:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: docker daemon config:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /etc/docker/daemon.json:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: docker system info:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: cri-docker daemon status:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: cri-docker daemon config:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: cri-dockerd version:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: containerd daemon status:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: containerd daemon config:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /etc/containerd/config.toml:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: containerd config dump:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: crio daemon status:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: crio daemon config:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: /etc/crio:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

>>> host: crio config:
* Profile "false-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-227717"

----------------------- debugLogs end: false-227717 [took: 3.161065978s] --------------------------------
helpers_test.go:175: Cleaning up "false-227717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-227717
--- PASS: TestNetworkPlugins/group/false (3.50s)
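
Note: the repeated profile-not-found output above is expected; debugLogs runs against profile false-227717 after the start command was rejected, so neither the minikube profile nor the kubectl context ever existed. The profiles that do exist at this point in the run can be listed with:

	out/minikube-linux-amd64 profile list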

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (52.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-413278 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-413278 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.445429627s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (40.11s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-577629 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-577629 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (40.111509865s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (54.64s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-639473 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-639473 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (54.642463495s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.64s)
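
Note: --preload=false disables minikube's preloaded image tarball, so the component images are pulled individually at start, which is consistent with this being the slowest of the three first starts above (54.6s). A quick sketch of inspecting what the runtime ended up with, using the same binary as the test:

	out/minikube-linux-amd64 -p no-preload-639473 image list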

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-577629 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [05b3d92b-1a21-478d-b8c3-0ecdfd7eab10] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [05b3d92b-1a21-478d-b8c3-0ecdfd7eab10] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004490864s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-577629 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.26s)
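
Note: the 8m0s "waiting for pods matching" loop is implemented by the test helpers; a roughly equivalent manual check, assuming the same label selector the helpers poll on, would be:

	kubectl --context embed-certs-577629 wait --for=condition=ready pod -l integration-test=busybox --timeout=480s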

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-413278 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [712a83a0-e691-4b04-b2dd-dc71e3ad52e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [712a83a0-e691-4b04-b2dd-dc71e3ad52e8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00378969s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-413278 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-577629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-577629 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-577629 --alsologtostderr -v=3
E0926 23:17:12.143045  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-577629 --alsologtostderr -v=3: (18.176152578s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-413278 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-413278 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-413278 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-413278 --alsologtostderr -v=3: (16.116453688s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-639473 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9ee619f6-4e26-4239-9951-510706e7c0ff] Pending
helpers_test.go:352: "busybox" [9ee619f6-4e26-4239-9951-510706e7c0ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9ee619f6-4e26-4239-9951-510706e7c0ff] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004147922s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-639473 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-577629 -n embed-certs-577629
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-577629 -n embed-certs-577629: exit status 7 (71.333248ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-577629 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)
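
Note: --format here is a Go template over minikube's status output, so {{.Host}} prints just the host state; exit status 7 is not a command failure but minikube's encoding of a stopped profile (which is why the test logs "may be ok"). The individual fields can be combined in one call, e.g. (a sketch using the same template fields the test queries):

	out/minikube-linux-amd64 status -p embed-certs-577629 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'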

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (47.72s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-577629 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-577629 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (47.402579592s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-577629 -n embed-certs-577629
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-639473 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-639473 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.69s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-639473 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-639473 --alsologtostderr -v=3: (16.687903348s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-413278 -n old-k8s-version-413278
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-413278 -n old-k8s-version-413278: exit status 7 (68.863115ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-413278 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-413278 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-413278 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.67087786s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-413278 -n old-k8s-version-413278
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-639473 -n no-preload-639473
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-639473 -n no-preload-639473: exit status 7 (75.883169ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-639473 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (48.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-639473 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-639473 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (48.329291029s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-639473 -n no-preload-639473
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fzb7k" [8057f16f-b347-404d-9763-e899167bdee2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004530751s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r2x7x" [45d94bbe-6873-400e-8887-64fe0b6c4ca5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003709946s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fzb7k" [8057f16f-b347-404d-9763-e899167bdee2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004215098s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-577629 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r2x7x" [45d94bbe-6873-400e-8887-64fe0b6c4ca5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00385756s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-413278 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-577629 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
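
Note: --format=json is one of several output modes for image list; for manual inspection of the same data outside the test, a table view would be (assuming the table format value, which is not exercised by this run):

	out/minikube-linux-amd64 -p embed-certs-577629 image list --format=table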

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-577629 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-577629 -n embed-certs-577629
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-577629 -n embed-certs-577629: exit status 2 (330.026108ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-577629 -n embed-certs-577629
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-577629 -n embed-certs-577629: exit status 2 (354.01914ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-577629 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-577629 -n embed-certs-577629
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-577629 -n embed-certs-577629
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.02s)
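
Note: the Paused/Stopped pair above is the expected status signature for a paused profile: pause freezes the control-plane containers (APIServer reports Paused) while the kubelet is stopped (Kubelet reports Stopped). The same round trip by hand, as a sketch:

	out/minikube-linux-amd64 pause -p embed-certs-577629
	out/minikube-linux-amd64 status -p embed-certs-577629 --format '{{.APIServer}}/{{.Kubelet}}'
	out/minikube-linux-amd64 unpause -p embed-certs-577629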

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-413278 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-441435 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-441435 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m10.154581544s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.15s)
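
Note: --apiserver-port=8444 moves the API server off minikube's default 8443; the resulting endpoint can be confirmed against the profile's kubeconfig entry, e.g.:

	kubectl --context default-k8s-diff-port-441435 cluster-info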

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-413278 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-413278 -n old-k8s-version-413278
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-413278 -n old-k8s-version-413278: exit status 2 (350.135042ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-413278 -n old-k8s-version-413278
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-413278 -n old-k8s-version-413278: exit status 2 (365.147459ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-413278 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-413278 -n old-k8s-version-413278
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-413278 -n old-k8s-version-413278
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (33.76s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-499131 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-499131 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (33.757554161s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.76s)
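
Note: this start exercises a user-supplied CNI configuration: --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 leaves pod networking to whatever CNI the user later applies, and --wait=apiserver,system_pods,default_sa deliberately narrows the readiness wait, since without an applied CNI, pods cannot schedule (see the WARNING lines in the later newest-cni subtests). A sketch of checking that state by hand:

	kubectl --context newest-cni-499131 get nodes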

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k7pnq" [ce370770-3daa-4e0d-92d1-028798ce96f0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003973693s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (71.73s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m11.731947654s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k7pnq" [ce370770-3daa-4e0d-92d1-028798ce96f0] Running
E0926 23:18:43.694333  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/functional-383702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00441984s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-639473 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-639473 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.38s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-639473 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-639473 -n no-preload-639473
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-639473 -n no-preload-639473: exit status 2 (396.203389ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-639473 -n no-preload-639473
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-639473 -n no-preload-639473: exit status 2 (372.964848ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-639473 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-639473 -n no-preload-639473
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-639473 -n no-preload-639473
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (70.95s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m10.950169243s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-499131 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-499131 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-499131 --alsologtostderr -v=3: (2.43780454s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.44s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-499131 -n newest-cni-499131
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-499131 -n newest-cni-499131: exit status 7 (87.415623ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-499131 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
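Done by hand, the check above is two commands (sketch). Exit status 7 from `status` on a stopped profile is what the test treats as acceptable, and the addon toggle is persisted for the next start.

# Query host state (prints "Stopped", non-zero exit), then enable the addon
# while the profile is down.
minikube status -p newest-cni-499131 --format='{{.Host}}'; echo "exit=$?"
minikube addons enable dashboard -p newest-cni-499131 --images=MetricsScraper=registry.k8s.io/echoserver:1.4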

TestStartStop/group/newest-cni/serial/SecondStart (11.03s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-499131 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-499131 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (10.713182742s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-499131 -n newest-cni-499131
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.03s)
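The restart command, annotated (sketch; same flags as the log). The reduced --wait set is the interesting part: per the WARNING elsewhere in this group, cni mode needs extra setup before ordinary pods can schedule, so readiness is only gated on the listed components.

# Second start of the stopped profile, waiting only on the apiserver,
# system pods and the default service account.
minikube start -p newest-cni-499131 --memory=3072 \
  --wait=apiserver,system_pods,default_sa \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.0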

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-499131 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
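A hand-run equivalent of the image audit (sketch). jq is an assumption here, any JSON tool works, and the field name follows the JSON emitted by `image list --format=json`.

# List images in the node, flatten the repoTags arrays, and surface anything
# not from the expected registries (the test flags these as "non-minikube").
minikube -p newest-cni-499131 image list --format=json \
  | jq -r '.[].repoTags[]?' \
  | grep -vE 'registry.k8s.io|gcr.io/k8s-minikube'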

TestStartStop/group/newest-cni/serial/Pause (2.61s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-499131 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-499131 -n newest-cni-499131
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-499131 -n newest-cni-499131: exit status 2 (299.677472ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-499131 -n newest-cni-499131
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-499131 -n newest-cni-499131: exit status 2 (307.347189ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-499131 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-499131 -n newest-cni-499131
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-499131 -n newest-cni-499131
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a80bb137-ad79-4a83-aba2-7aaddb51dc45] Pending
helpers_test.go:352: "busybox" [a80bb137-ad79-4a83-aba2-7aaddb51dc45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a80bb137-ad79-4a83-aba2-7aaddb51dc45] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004249s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)
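The DeployApp flow by hand (sketch; testdata/busybox.yaml is the repo manifest and the label matches what the test waits on):

# Deploy the pod, wait for readiness, then read the open-file limit
# inside the container - the value the test asserts on.
kubectl --context default-k8s-diff-port-441435 create -f testdata/busybox.yaml
kubectl --context default-k8s-diff-port-441435 wait --for=condition=Ready pod -l integration-test=busybox --timeout=480s
kubectl --context default-k8s-diff-port-441435 exec busybox -- /bin/sh -c "ulimit -n"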

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-441435 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-441435 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-441435 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-441435 --alsologtostderr -v=3: (18.116185326s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.12s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-227717 "pgrep -a kubelet"
I0926 23:19:53.008041  212137 config.go:182] Loaded profile config "auto-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)
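The KubeletFlags check is a single command over minikube's ssh wrapper (sketch):

# Print the kubelet process with its full command line inside the node
# container; the test inspects this output for the expected flags.
minikube ssh -p auto-227717 "pgrep -a kubelet"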

TestNetworkPlugins/group/auto/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-227717 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hxj2v" [c0c22f8f-9b87-43ec-bd13-ac12ed5af814] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hxj2v" [c0c22f8f-9b87-43ec-bd13-ac12ed5af814] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004305071s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.27s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-227717 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
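The three probes above repeat for every CNI group below. In shell form (sketch, assuming the netcat Deployment from testdata sits behind a Service named "netcat" on port 8080):

CTX=auto-227717
# DNS: resolve a cluster Service name from inside the pod.
kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
# Localhost: in-pod loopback connectivity.
kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the pod reaches itself back through its own Service, which only
# works when the CNI handles hairpin traffic.
kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"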

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-xp4dz" [fee0ef20-d1e3-4aae-b5bf-eae0127c38cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004116064s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435: exit status 7 (79.142661ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-441435 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-441435 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-441435 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (50.26822022s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.60s)
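To confirm the restart kept the non-default API server port 8444 (sketch; the jsonpath filter is an assumption, `cluster-info` alone also shows the server URL):

# Both commands read the rewritten kubeconfig; the server URL should end in :8444.
kubectl --context default-k8s-diff-port-441435 cluster-info
kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-441435")].cluster.server}'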

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-227717 "pgrep -a kubelet"
I0926 23:20:10.429296  212137 config.go:182] Loaded profile config "kindnet-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-227717 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-87p6l" [059de83d-665d-4cfb-8418-8fe31c8d99bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-87p6l" [059de83d-665d-4cfb-8418-8fe31c8d99bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004524753s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-227717 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (46.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (46.239864174s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (46.24s)

TestNetworkPlugins/group/enable-default-cni/Start (59.71s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (59.710620859s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.71s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-227717 "pgrep -a kubelet"
I0926 23:21:07.598278  212137 config.go:182] Loaded profile config "custom-flannel-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-227717 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-njxp2" [0cd5dc03-9cd8-46a9-90ee-19bd78c1f747] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-njxp2" [0cd5dc03-9cd8-46a9-90ee-19bd78c1f747] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004118751s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-227717 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (46.44s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.435242336s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.44s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-227717 "pgrep -a kubelet"
I0926 23:21:38.843489  212137 config.go:182] Loaded profile config "enable-default-cni-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.84s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-227717 replace --force -f testdata/netcat-deployment.yaml
I0926 23:21:39.249291  212137 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0926 23:21:39.384597  212137 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2mxww" [39c83427-fa6d-4f92-86bb-fcf495aefc13] Pending
helpers_test.go:352: "netcat-cd4db9dbf-2mxww" [39c83427-fa6d-4f92-86bb-fcf495aefc13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2mxww" [39c83427-fa6d-4f92-86bb-fcf495aefc13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003775423s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.84s)
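The kapi.go:136 lines above compare the Deployment's metadata.generation with status.observedGeneration until the controller catches up; the equivalent one-off check (sketch):

# A deployment has "stabilized" once observedGeneration reaches generation.
kubectl --context enable-default-cni-227717 get deploy netcat \
  -o jsonpath='{.metadata.generation}{" "}{.status.observedGeneration}{" "}{.status.replicas}'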

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-227717 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (32.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0926 23:22:08.344954  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/old-k8s-version-413278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:10.906980  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/old-k8s-version-413278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:12.143103  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/addons-341571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:16.029064  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/old-k8s-version-413278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:19.818131  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:19.824525  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:19.835873  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:19.857248  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:19.898622  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:19.980116  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:20.141563  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:20.463098  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:21.104786  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-227717 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (32.976262921s)
--- PASS: TestNetworkPlugins/group/bridge/Start (32.98s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-xmz9f" [1107b8d7-ee40-4882-bdd3-eb5c823de30f] Running
E0926 23:22:22.386795  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:24.949197  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0926 23:22:26.271044  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/old-k8s-version-413278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003569298s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
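ControllerPod readiness can be checked directly with kubectl (sketch; label and namespace as in the log above):

# Block until the flannel DaemonSet pod reports Ready, as the test does.
kubectl --context flannel-227717 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=600s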

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-227717 "pgrep -a kubelet"
I0926 23:22:28.419395  212137 config.go:182] Loaded profile config "flannel-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-227717 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6glmc" [c747902e-4d74-4857-b39a-f16d25b6e21b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0926 23:22:30.071165  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/no-preload-639473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-6glmc" [c747902e-4d74-4857-b39a-f16d25b6e21b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003269518s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-227717 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-227717 "pgrep -a kubelet"
I0926 23:22:41.489002  212137 config.go:182] Loaded profile config "bridge-227717": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-227717 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xlvh2" [0bfbd415-9c0b-4bed-b480-7f81410e9bbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xlvh2" [0bfbd415-9c0b-4bed-b480-7f81410e9bbb] Running
E0926 23:22:46.752446  212137 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/old-k8s-version-413278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.002926484s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-227717 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-227717 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-441435 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-441435 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435: exit status 2 (293.713319ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435: exit status 2 (294.638321ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-441435 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-441435 -n default-k8s-diff-port-441435
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.63s)

Test skip (27/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestAddons/serial/Volcano (0.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-341571 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-843780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-843780
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (2.99s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-227717 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-227717

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-227717

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-227717

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-227717

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-227717

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-227717

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-227717

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-227717

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-227717

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-227717

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-227717

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-227717" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-778862
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:14:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-800294
contexts:
- context:
    cluster: cert-expiration-778862
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-778862
  name: cert-expiration-778862
- context:
    cluster: kubernetes-upgrade-800294
    user: kubernetes-upgrade-800294
  name: kubernetes-upgrade-800294
current-context: ""
kind: Config
users:
- name: cert-expiration-778862
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/cert-expiration-778862/client.crt
    client-key: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/cert-expiration-778862/client.key
- name: kubernetes-upgrade-800294
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/kubernetes-upgrade-800294/client.crt
    client-key: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/kubernetes-upgrade-800294/client.key
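
This kubeconfig dump is itself the diagnosis: it lists only the leftover cert-expiration-778862 and kubernetes-upgrade-800294 entries with an empty current-context, so every kubectl call above against the never-started kubenet-227717 profile fails with "context was not found". One way to confirm the missing context directly (output shape may vary by kubectl version):

	kubectl config get-contexts kubenet-227717
	# expected to report that no context named kubenet-227717 exists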
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-227717

>>> host: docker daemon status:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: docker daemon config:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: docker system info:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: cri-docker daemon status:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: cri-docker daemon config:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: cri-dockerd version:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: containerd daemon status:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: containerd daemon config:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: containerd config dump:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: crio daemon status:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: crio daemon config:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: /etc/crio:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

>>> host: crio config:
* Profile "kubenet-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-227717"

----------------------- debugLogs end: kubenet-227717 [took: 2.845923434s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-227717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-227717
--- SKIP: TestNetworkPlugins/group/kubenet (2.99s)
TestNetworkPlugins/group/cilium (5.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-227717 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-227717

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-227717

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-227717

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-227717

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-227717

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-227717

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-227717

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-227717

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-227717

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-227717

>>> host: /etc/nsswitch.conf:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /etc/hosts:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /etc/resolv.conf:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-227717

>>> host: crictl pods:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: crictl containers:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> k8s: describe netcat deployment:
error: context "cilium-227717" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-227717" does not exist

>>> k8s: netcat logs:
error: context "cilium-227717" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-227717" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-227717" does not exist

>>> k8s: coredns logs:
error: context "cilium-227717" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-227717" does not exist

>>> k8s: api server logs:
error: context "cilium-227717" does not exist

>>> host: /etc/cni:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: ip a s:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: ip r s:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: iptables-save:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: iptables table nat:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-227717

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-227717

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-227717" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-227717" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-227717

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-227717

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-227717" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-227717" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-227717" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-227717" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-227717" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: kubelet daemon config:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> k8s: kubelet logs:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-778862
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21642-208519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:14:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-800294
contexts:
- context:
    cluster: cert-expiration-778862
    extensions:
    - extension:
        last-update: Fri, 26 Sep 2025 23:13:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-778862
  name: cert-expiration-778862
- context:
    cluster: kubernetes-upgrade-800294
    user: kubernetes-upgrade-800294
  name: kubernetes-upgrade-800294
current-context: ""
kind: Config
users:
- name: cert-expiration-778862
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/cert-expiration-778862/client.crt
    client-key: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/cert-expiration-778862/client.key
- name: kubernetes-upgrade-800294
  user:
    client-certificate: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/kubernetes-upgrade-800294/client.crt
    client-key: /home/jenkins/minikube-integration/21642-208519/.minikube/profiles/kubernetes-upgrade-800294/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-227717

>>> host: docker daemon status:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: docker daemon config:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: docker system info:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: cri-docker daemon status:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: cri-docker daemon config:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: cri-dockerd version:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: containerd daemon status:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: containerd daemon config:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: containerd config dump:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: crio daemon status:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: crio daemon config:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: /etc/crio:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

>>> host: crio config:
* Profile "cilium-227717" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-227717"

----------------------- debugLogs end: cilium-227717 [took: 5.371201221s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-227717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-227717
--- SKIP: TestNetworkPlugins/group/cilium (5.53s)